LEADER 01000naa a22002652 4500
001 NLM31836705X
003 DE-627
005 20231225165216.0
007 cr uuu---uuuuu
008 231225s2022 xx |||||o 00| ||eng c
024 7  |a 10.1109/TPAMI.2020.3042193 |2 doi
028 52 |a pubmed24n1061.xml
035    |a (DE-627)NLM31836705X
035    |a (NLM)33270558
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Lin, Mingbao |e verfasserin |4 aut
245 10 |a Fast Class-Wise Updating for Online Hashing
264  1 |c 2022
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Revised 04.04.2022
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a Online image hashing, which processes large-scale data in a streaming fashion and updates the hash functions on-the-fly, has recently received increasing research attention. Most existing works address this problem in a supervised setting, i.e., using class labels to boost the hashing performance, which suffers from defects in both adaptivity and efficiency: First, large numbers of training batches are required to learn up-to-date hash functions, which leads to poor online adaptivity. Second, the training is time-consuming, which contradicts the core need of online learning. In this paper, a novel supervised online hashing scheme, termed Fast Class-wise Updating for Online Hashing (FCOH), is proposed to address these two challenges by introducing a novel and efficient inner product operation. To achieve fast online adaptivity, a class-wise updating method is developed that decomposes the binary code learning and alternately renews the hash functions in a class-wise fashion, removing the burden of large numbers of training batches. Quantitatively, this decomposition further yields at least a 75 percent storage saving. To further achieve online efficiency, we propose a semi-relaxation optimization, which accelerates the online training by treating different binary constraints independently. Without additional constraints and variables, the time complexity is significantly reduced. This scheme is also quantitatively shown to preserve past information well while updating the hash functions. Extensive experiments on three widely used datasets demonstrate that the collective effort of class-wise updating and semi-relaxation optimization delivers superior performance compared to various state-of-the-art methods.
650  4 |a Journal Article
700 1  |a Ji, Rongrong |e verfasserin |4 aut
700 1  |a Sun, Xiaoshuai |e verfasserin |4 aut
700 1  |a Zhang, Baochang |e verfasserin |4 aut
700 1  |a Huang, Feiyue |e verfasserin |4 aut
700 1  |a Tian, Yonghong |e verfasserin |4 aut
700 1  |a Tao, Dacheng |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on pattern analysis and machine intelligence |d 1979 |g 44(2022), 5 vom: 03. Mai, Seite 2453-2467 |w (DE-627)NLM098212257 |x 1939-3539 |7 nnns
773 18 |g volume:44 |g year:2022 |g number:5 |g day:03 |g month:05 |g pages:2453-2467
856 40 |u http://dx.doi.org/10.1109/TPAMI.2020.3042193 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 44 |j 2022 |e 5 |b 03 |c 05 |h 2453-2467