|
|
|
|
LEADER |
01000naa a22002652 4500 |
001 |
NLM274239019 |
003 |
DE-627 |
005 |
20231225003035.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2017 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2017.2729896
|2 doi
|
028 |
5 |
2 |
|a pubmed24n0914.xml
|
035 |
|
|
|a (DE-627)NLM274239019
|
035 |
|
|
|a (NLM)28749350
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Xianglong Liu
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search
|
264 |
|
1 |
|c 2017
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 30.07.2018
|
500 |
|
|
|a Date Revised 30.07.2018
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Hashing has proven an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete set of binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small, unique binary codes. Our alternating optimization adaptively discovers the prototype set and a code set of varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training that is linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments now widely deployed in many areas. Extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with up to 58.84% relative performance gains.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Zhujin Li
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Cheng Deng
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Dacheng Tao
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 26(2017), 11 vom: 27. Nov., Seite 5324-5336
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:26
|g year:2017
|g number:11
|g day:27
|g month:11
|g pages:5324-5336
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2017.2729896
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 26
|j 2017
|e 11
|b 27
|c 11
|h 5324-5336
|