LEADER 01000naa a22002652 4500
001 NLM364831839
003 DE-627
005 20231226100310.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7_ |a 10.1109/TIP.2023.3265262 |2 doi
028 52 |a pubmed24n1216.xml
035 __ |a (DE-627)NLM364831839
035 __ |a (NLM)37991910
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
100 1_ |a Song, Jingkuan |e verfasserin |4 aut
245 10 |a Spherical Centralized Quantization for Fast Image Retrieval
264 _1 |c 2023
336 __ |a Text |b txt |2 rdacontent
337 __ |a Computermedien |b c |2 rdamedia
338 __ |a Online-Ressource |b cr |2 rdacarrier
500 __ |a Date Revised 04.12.2023
500 __ |a published: Print-Electronic
500 __ |a Citation Status PubMed-not-MEDLINE
520 __ |a Existing supervised quantization methods usually learn the quantizers from pair-wise, triplet, or anchor-based losses, which capture the relationships among feature vectors only locally, without aligning them globally. This can lead to inadequate use of the entire feature space and severe overlap among different semantics, resulting in inferior retrieval performance. Furthermore, to train quantizers end-to-end, current practices usually relax the non-differentiable quantization operation by substituting it with softmax, which is unfortunately biased and leads to an unsatisfactory, suboptimal solution. To address these issues, we present Spherical Centralized Quantization (SCQ), which contains a Priori Knowledge based Feature Aggregation (PKFA) module for the global alignment of feature vectors and an Annealing Regulation Semantic Quantization (ARSQ) module for low-biased optimization. Specifically, the PKFA module first applies Semantic Center Allocation (SCA) to obtain semantic centers based on prior knowledge, and then adopts Centralized Feature Alignment (CFA) to gather feature vectors around their corresponding semantic centers; SCA and CFA globally optimize inter-class separability and intra-class compactness, respectively. The ARSQ module then performs a partial-soft relaxation to tackle the bias and applies an Annealing Regulation Quantization loss to further escape local optima. Experimental results show that SCQ outperforms state-of-the-art algorithms by a large margin (2.1%, 3.6%, and 5.5% mAP on CIFAR-10, NUS-WIDE, and ImageNet, respectively) with a code length of 8 bits. Code is publicly available: https://github.com/zzb111/Spherical-Centralized-Quantization
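[Editor's sketch] The ARSQ module described in the abstract centers on relaxing the non-differentiable quantization step. The sketch below is illustrative only, not the authors' implementation (which lives in the linked repository): it shows one common reading of a "partial-soft relaxation", a temperature-annealed softmax assignment combined with a straight-through hard assignment. The codebook size, feature dimensionality, and annealing schedule are assumptions made up for the example.

# Illustrative sketch of temperature-annealed soft quantization with a
# straight-through ("partial-soft") relaxation. NOT the authors' code;
# see https://github.com/zzb111/Spherical-Centralized-Quantization
import torch
import torch.nn.functional as F

def soft_quantize(x, codebook, temperature):
    """Assign features to codewords with a temperature-controlled softmax.

    x:         (batch, dim) feature vectors
    codebook:  (K, dim) learnable codewords
    Lower temperatures sharpen the assignment toward hard quantization.
    """
    # Negative squared distances serve as assignment logits.
    logits = -torch.cdist(x, codebook) ** 2 / temperature
    soft_assign = F.softmax(logits, dim=-1)              # (batch, K)

    # Straight-through trick: the forward pass uses the hard (argmax)
    # assignment, while gradients flow through the soft assignment,
    # reducing the bias a pure softmax relaxation introduces.
    hard_assign = F.one_hot(soft_assign.argmax(-1), codebook.size(0)).float()
    assign = hard_assign + soft_assign - soft_assign.detach()
    return assign @ codebook                             # quantized features

# Toy usage with an annealed temperature schedule (illustrative values).
codebook = torch.randn(256, 64, requires_grad=True)
features = F.normalize(torch.randn(32, 64), dim=-1)      # on the unit sphere
for epoch in range(10):
    temperature = max(0.1, 0.8 ** epoch)                 # anneal toward hard
    quantized = soft_quantize(features, codebook, temperature)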
650 _4 |a Journal Article
700 1_ |a Zhang, Zhibin |e verfasserin |4 aut
700 1_ |a Zhu, Xiaosu |e verfasserin |4 aut
700 1_ |a Zhao, Qike |e verfasserin |4 aut
700 1_ |a Wang, Meng |e verfasserin |4 aut
700 1_ |a Shen, Heng Tao |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |d 1992 |g 32(2023) vom: 22., Seite 6485-6499 |w (DE-627)NLM09821456X |x 1941-0042 |7 nnns
773 18 |g volume:32 |g year:2023 |g day:22 |g pages:6485-6499
856 40 |u http://dx.doi.org/10.1109/TIP.2023.3265262 |3 Volltext
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_NLM
912 __ |a GBV_ILN_350
951 __ |a AR
952 __ |d 32 |j 2023 |b 22 |h 6485-6499