LEADER |
01000naa a22002652 4500 |
001 |
NLM269020373 |
003 |
DE-627 |
005 |
20231224223744.0 |
007 |
cr uuu---uuuuu |
008 |
231224s2018 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2017.2666805
|2 doi
|
028 |
5 |
2 |
|a pubmed24n0896.xml
|
035 |
|
|
|a (DE-627)NLM269020373
|
035 |
|
|
|a (NLM)28207383
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Chen, Ying-Cong
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Person Re-Identification by Camera Correlation Aware Feature Augmentation
|
264 |
|
1 |
|c 2018
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 30.01.2019
|
500 |
|
|
|a Date Revised 30.01.2019
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a The challenge of person re-identification (re-id) is to match individual images of the same person captured by different non-overlapping camera views despite significant and unknown cross-view feature distortion. While a large number of distance metric/subspace learning models have been developed for re-id, the cross-view transformations they learn are view-generic and thus potentially less effective in quantifying the feature distortion inherent to each camera view. Learning view-specific feature transformations for re-id (i.e., view-specific re-id), an under-studied approach, offers an alternative solution to this problem. In this work, we formulate a novel view-specific person re-identification framework from the feature augmentation point of view, called Camera coRrelation Aware Feature augmenTation (CRAFT). Specifically, CRAFT performs cross-view adaptation by automatically measuring camera correlation from the cross-view visual data distribution and adaptively conducting feature augmentation to transform the original features into a new adaptive space. Through our augmentation framework, view-generic learning algorithms can be readily generalized to learn and optimize view-specific sub-models whilst simultaneously modelling view-generic discrimination information. Therefore, our framework not only inherits the strength of view-generic model learning but also provides an effective way to take view-specific characteristics into account. Our CRAFT framework can be extended to jointly learn view-specific feature transformations for person re-id across a large network with more than two cameras, a largely under-investigated but realistic re-id setting. Additionally, we present a domain-generic deep person appearance representation designed to be view-invariant in order to facilitate cross-view adaptation by CRAFT. We conducted extensive comparative experiments to validate the superiority and advantages of our proposed framework over state-of-the-art competitors on challenging contemporary person re-id datasets
|
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a Research Support, Non-U.S. Gov't
|
700 |
1 |
|
|a Zhu, Xiatian
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zheng, Wei-Shi
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Lai, Jian-Huang
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 40(2018), 2 vom: 16. Feb., Seite 392-408
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:40
|g year:2018
|g number:2
|g day:16
|g month:02
|g pages:392-408
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2017.2666805
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 40
|j 2018
|e 2
|b 16
|c 02
|h 392-408
|