Learning Invariance From Generated Variance for Unsupervised Person Re-Identification

This work focuses on unsupervised representation learning in person re-identification (ReID). Recent self-supervised contrastive learning methods learn invariance by maximizing the representation similarity between two augmented views of the same image. However, traditional data augmentation may introduce undesirable distortions on identity features, which is not always favorable in id-sensitive ReID tasks. In this article, we propose to replace traditional data augmentation with a generative adversarial network (GAN) that is targeted to generate augmented views for contrastive learning. A 3D-mesh-guided person image generator is proposed to disentangle a person image into id-related and id-unrelated features. Deviating from previous GAN-based ReID methods that only work in id-unrelated space (pose and camera style), we conduct GAN-based augmentation on both id-unrelated and id-related features. We further propose specific contrastive losses to help our network learn invariance from id-unrelated and id-related augmentations. By jointly training the generative and the contrastive modules, our method achieves new state-of-the-art unsupervised person ReID performance on mainstream large-scale benchmarks.
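
The contrastive objective the abstract refers to (maximizing representation similarity between two augmented views of the same image) is commonly implemented as an InfoNCE-style loss. The following is a minimal NumPy sketch of that general idea, not the authors' actual implementation; the function name and batch shapes are illustrative assumptions:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss between two batches of view embeddings.

    z1, z2: (N, D) arrays where z1[i] and z2[i] are embeddings of two
    augmented views of the same image i.
    """
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N): row i vs. all views in z2
    # Diagonal entries are positive pairs; off-diagonal entries act as negatives.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy check: matched views should score a much lower loss than random pairings.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_same = info_nce_loss(z, z)
loss_rand = info_nce_loss(z, rng.normal(size=(8, 16)))
print(loss_same < loss_rand)  # → True
```

Minimizing this loss pulls the two views of each identity together while pushing apart views of different images, which is the invariance-learning mechanism the paper builds on with GAN-generated views.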

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 45(2023), 6, 05 June, pages 7494-7508
Main Author: Chen, Hao (Author)
Other Authors: Wang, Yaohui, Lagadec, Benoit, Dantcheva, Antitza, Bremond, Francois
Format: Online Article
Language: English
Published: 2023
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM355203162
003 DE-627
005 20231226063840.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2022.3226866  |2 doi 
028 5 2 |a pubmed24n1183.xml 
035 |a (DE-627)NLM355203162 
035 |a (NLM)37015570 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Chen, Hao  |e verfasserin  |4 aut 
245 1 0 |a Learning Invariance From Generated Variance for Unsupervised Person Re-Identification 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 07.05.2023 
500 |a Date Revised 07.05.2023 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a This work focuses on unsupervised representation learning in person re-identification (ReID). Recent self-supervised contrastive learning methods learn invariance by maximizing the representation similarity between two augmented views of the same image. However, traditional data augmentation may introduce undesirable distortions on identity features, which is not always favorable in id-sensitive ReID tasks. In this article, we propose to replace traditional data augmentation with a generative adversarial network (GAN) that is targeted to generate augmented views for contrastive learning. A 3D-mesh-guided person image generator is proposed to disentangle a person image into id-related and id-unrelated features. Deviating from previous GAN-based ReID methods that only work in id-unrelated space (pose and camera style), we conduct GAN-based augmentation on both id-unrelated and id-related features. We further propose specific contrastive losses to help our network learn invariance from id-unrelated and id-related augmentations. By jointly training the generative and the contrastive modules, our method achieves new state-of-the-art unsupervised person ReID performance on mainstream large-scale benchmarks.
650 4 |a Journal Article 
700 1 |a Wang, Yaohui  |e verfasserin  |4 aut 
700 1 |a Lagadec, Benoit  |e verfasserin  |4 aut 
700 1 |a Dantcheva, Antitza  |e verfasserin  |4 aut 
700 1 |a Bremond, Francois  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 45(2023), 6 vom: 05. Juni, Seite 7494-7508  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:45  |g year:2023  |g number:6  |g day:05  |g month:06  |g pages:7494-7508 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2022.3226866  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 45  |j 2023  |e 6  |b 05  |c 06  |h 7494-7508