LEADER |
01000naa a22002652 4500 |
001 |
NLM271604654 |
003 |
DE-627 |
005 |
20231224233047.0 |
007 |
cr uuu---uuuuu |
008 |
231224s2017 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2017.2700762
|2 doi
|
028 |
5 |
2 |
|a pubmed24n0905.xml
|
035 |
|
|
|a (DE-627)NLM271604654
|
035 |
|
|
|a (NLM)28475058
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Hao Liu
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a End-to-End Comparative Attention Networks for Person Re-Identification
|
264 |
|
1 |
|c 2017
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 11.12.2018
|
500 |
|
|
|a Date Revised 11.12.2018
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Person re-identification across disjoint camera views has been widely applied in video surveillance, yet it remains a challenging problem. One of the major challenges lies in the lack of spatial and temporal cues, which makes it difficult to deal with large variations in lighting conditions, viewing angles, body poses, and occlusions. Recently, several deep-learning-based person re-identification approaches have been proposed and have achieved remarkable performance. However, most of those approaches extract discriminative features from the whole frame at one glimpse, without differentiating the various parts of the persons to identify. It is essential to examine multiple highly discriminative local regions of the person images in detail through multiple glimpses in order to deal with the large appearance variance. In this paper, we propose a new soft attention-based model, i.e., the end-to-end comparative attention network (CAN), specifically tailored for the task of person re-identification. The end-to-end CAN learns to selectively focus on parts of pairs of person images after taking a few glimpses of them and adaptively comparing their appearance. The CAN model is able to learn which parts of the images are relevant for discerning persons and automatically integrates information from different parts to determine whether a pair of images belongs to the same person. In other words, our proposed CAN model simulates the human perception process to verify whether two images are from the same person. Extensive experiments on four benchmark person re-identification data sets, including CUHK01, CUHK03, Market-1501, and VIPeR, clearly demonstrate that our proposed end-to-end CAN for person re-identification significantly outperforms well-established baselines and offers new state-of-the-art performance.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Jiashi Feng
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Meibin Qi
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Jianguo Jiang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Shuicheng Yan
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 26(2017), 7 vom: 05. Juli, Seite 3492-3506
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:26
|g year:2017
|g number:7
|g day:05
|g month:07
|g pages:3492-3506
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2017.2700762
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 26
|j 2017
|e 7
|b 05
|c 07
|h 3492-3506
|