LEADER |
01000caa a22002652 4500 |
001 |
NLM30778309X |
003 |
DE-627 |
005 |
20240229162657.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2020 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2020.2971105
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1308.xml
|
035 |
|
|
|a (DE-627)NLM30778309X
|
035 |
|
|
|a (NLM)32191885
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Jin, Sheng
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Deep Saliency Hashing for Fine-grained Retrieval
|
264 |
|
1 |
|c 2020
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 27.02.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status Publisher
|
520 |
|
|
|a In recent years, hashing methods have proved to be effective and efficient for large-scale Web media search. However, existing general hashing methods have limited discriminative power for describing fine-grained objects that share a similar overall appearance but exhibit subtle differences. To solve this problem, we introduce the attention mechanism to the learning of fine-grained hashing codes for the first time. Specifically, we propose a novel deep hashing model, named deep saliency hashing (DSaH), which automatically mines salient regions and simultaneously learns semantic-preserving hashing codes. DSaH is a two-step end-to-end model consisting of an attention network and a hashing network. Our loss function contains three basic components: the semantic loss, the saliency loss, and the quantization loss. As the core of DSaH, the saliency loss guides the attention network to mine discriminative regions from pairs of images. We conduct extensive experiments on both fine-grained and general retrieval datasets for performance evaluation. Experimental results on fine-grained datasets, including Oxford Flowers, Stanford Dogs, and CUB Birds, demonstrate that DSaH performs best on the fine-grained retrieval task and beats the strongest competitor (DTQ) by approximately 10% on both Stanford Dogs and CUB Birds. DSaH is also comparable to several state-of-the-art hashing methods on CIFAR-10 and NUS-WIDE.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Yao, Hongxun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Sun, Xiaoshuai
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhou, Shangchen
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhang, Lei
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Hua, Xiansheng
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g (2020) vom: 16. März
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g year:2020
|g day:16
|g month:03
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2020.2971105
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|j 2020
|b 16
|c 03
|