Complementary Data Augmentation for Cloth-Changing Person Re-Identification


Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 31 (2022), dated: 01., pages 4227-4239
First author: Jia, Xuemei (author)
Other authors: Zhong, Xian; Ye, Mang; Liu, Wenxuan; Huang, Wenxin
Format: Online article
Language: English
Published: 2022
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Abstract: This paper studies the challenging person re-identification (Re-ID) task under the cloth-changing scenario, where the same identity (ID) suffers from uncertain cloth changes. To learn cloth- and ID-invariant features, it is crucial to collect abundant training data with varying clothes, which is difficult in practice. To alleviate the reliance on rich data collection, we reinforce the feature learning process by designing powerful complementary data augmentation strategies, including positive and negative data augmentation. Specifically, the positive augmentation fulfills the ID space by randomly patching the person images with different clothes, simulating rich appearance to enhance the robustness against clothes variations. For negative augmentation, the basic idea is to randomly generate out-of-distribution synthetic samples by combining various appearance and posture factors from real samples. The designed strategies seamlessly reinforce the feature learning without introducing additional information. Extensive experiments conducted on both cloth-changing and cloth-unchanging tasks demonstrate the superiority of our proposed method, consistently improving accuracy over various baselines.
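The positive augmentation described above (randomly patching person images with regions showing different clothes, while keeping the ID label) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: the patch geometry, the `positive_clothes_patch` function name, and the torso bias are all hypothetical choices; the negative augmentation (recombining appearance and posture factors into out-of-distribution samples) would additionally require a disentangled generator and is not sketched here.

```python
import random
import numpy as np

def positive_clothes_patch(img: np.ndarray, donor: np.ndarray,
                           min_frac: float = 0.2, max_frac: float = 0.5) -> np.ndarray:
    """Illustrative positive augmentation: paste a random rectangular region
    taken from a donor image (a person wearing different clothes) onto the
    anchor image, keeping the anchor's identity label unchanged.

    img, donor: H x W x 3 uint8 arrays of the same size (aligned person crops).
    """
    h, w = img.shape[:2]
    ph = random.randint(int(min_frac * h), int(max_frac * h))
    pw = random.randint(int(min_frac * w), int(max_frac * w))
    # Bias the patch toward the lower three quarters, where clothes dominate
    # (a heuristic assumption, not specified by the paper's abstract).
    y = random.randint(h // 4, max(h // 4, h - ph - 1))
    x = random.randint(0, w - pw)
    out = img.copy()
    out[y:y + ph, x:x + pw] = donor[y:y + ph, x:x + pw]
    return out
```

In training, such a patched image would typically be fed through the same Re-ID loss as the original sample, so the network is pushed to rely on cloth-invariant cues rather than clothing appearance.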
Description: Date Completed 01.07.2022
Date Revised 01.07.2022
published: Print-Electronic
Citation Status MEDLINE
ISSN:1941-0042
DOI:10.1109/TIP.2022.3183469