Human Co-Parsing Guided Alignment for Occluded Person Re-identification

Occluded person re-identification (ReID) is a challenging task due to increased background noise and incomplete foreground information. Although existing human parsing-based ReID methods can tackle this problem with semantic alignment at the finest pixel level, their performance is heavily affected by the human parsing model. Most supervised methods train an extra human parsing model alongside the ReID model using cross-domain human-part annotations, and therefore suffer from expensive annotation costs and a domain gap; unsupervised methods integrate a feature-clustering-based human parsing process into the ReID model, but the lack of supervision signals leads to less satisfactory segmentation results. In this paper, we argue that the pre-existing information in the ReID training dataset can be directly used as supervision signals to train the human parsing model without any extra annotation. By integrating a weakly supervised human co-parsing network into the ReID network, we propose a novel framework, called the Human Co-parsing Guided Alignment (HCGA) framework, that exploits shared information across different images of the same pedestrian. Specifically, the human co-parsing network is weakly supervised by three consistency criteria, namely global semantics, local space, and background. By feeding the semantic information and deep features from the person ReID network into the guided alignment module, features of the foreground and human parts can then be obtained for effective occluded person ReID. Experimental results on two occluded and two holistic datasets demonstrate the superiority of our method. In particular, on Occluded-DukeMTMC it achieves 70.2% Rank-1 accuracy and 57.5% mAP.
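As a rough illustration only (the paper's actual code, layer choices, and training losses are not given here), the following PyTorch-style sketch shows the general structure the abstract describes: a backbone extracts deep features, a co-parsing head predicts part/background masks, and a guided-alignment step pools mask-weighted part features alongside a holistic feature. All module names, channel sizes, and the number of parts are assumptions, not the authors' implementation.

# Minimal, hypothetical sketch of the pipeline described in the abstract:
# backbone features -> co-parsing masks -> mask-guided part pooling.
# Module names, feature sizes, and the number of parts (K) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoParsingHead(nn.Module):
    """Predicts K human-part masks plus background from backbone features."""
    def __init__(self, in_channels: int, num_parts: int = 4):
        super().__init__()
        self.classifier = nn.Conv2d(in_channels, num_parts + 1, kernel_size=1)

    def forward(self, feat):                       # feat: (B, C, H, W)
        logits = self.classifier(feat)             # (B, K+1, H, W)
        return logits.softmax(dim=1)               # soft part/background masks

class GuidedAlignment(nn.Module):
    """Pools backbone features with the predicted masks into part descriptors."""
    def forward(self, feat, masks):
        # feat: (B, C, H, W), masks: (B, K+1, H, W); drop the background channel.
        part_masks = masks[:, 1:]                                      # (B, K, H, W)
        weighted = feat.unsqueeze(1) * part_masks.unsqueeze(2)         # (B, K, C, H, W)
        denom = part_masks.sum(dim=(-2, -1)).clamp(min=1e-6)           # (B, K)
        part_feats = weighted.sum(dim=(-2, -1)) / denom.unsqueeze(-1)  # (B, K, C)
        return part_feats

class HCGASketch(nn.Module):
    def __init__(self, num_parts: int = 4, channels: int = 64):
        super().__init__()
        # A toy backbone stands in for the ReID feature extractor.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.co_parsing = CoParsingHead(channels, num_parts)
        self.align = GuidedAlignment()

    def forward(self, images):                     # images: (B, 3, H, W)
        feat = self.backbone(images)
        masks = self.co_parsing(feat)
        global_feat = F.adaptive_avg_pool2d(feat, 1).flatten(1)   # holistic feature
        part_feats = self.align(feat, masks)                      # part-level features
        return global_feat, part_feats, masks

if __name__ == "__main__":
    model = HCGASketch()
    g, p, m = model(torch.randn(2, 3, 256, 128))
    print(g.shape, p.shape, m.shape)   # (2, 64) (2, 4, 64) (2, 5, 64, 32)

In the paper, the co-parsing head would additionally be trained with the three consistency criteria named in the abstract (global semantics, local space, background); those losses are omitted here because their exact formulation is not reproduced in this record.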

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - PP(2022), 20 Dec.
First author: Dou, Shuguang (Author)
Other authors: Zhao, Cairong, Jiang, Xinyang, Zhang, Shanshan, Zheng, Wei-Shi, Zuo, Wangmeng
Format: Online article
Language: English
Published: 2022
Contained in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM355201798
003 DE-627
005 20231226063839.0
007 cr uuu---uuuuu
008 231226s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2022.3229639  |2 doi 
028 5 2 |a pubmed24n1183.xml 
035 |a (DE-627)NLM355201798 
035 |a (NLM)37015433 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Dou, Shuguang  |e verfasserin  |4 aut 
245 1 0 |a Human Co-Parsing Guided Alignment for Occluded Person Re-identification 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 04.04.2023 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a Occluded person re-identification (ReID) is a challenging task due to increased background noise and incomplete foreground information. Although existing human parsing-based ReID methods can tackle this problem with semantic alignment at the finest pixel level, their performance is heavily affected by the human parsing model. Most supervised methods train an extra human parsing model alongside the ReID model using cross-domain human-part annotations, and therefore suffer from expensive annotation costs and a domain gap; unsupervised methods integrate a feature-clustering-based human parsing process into the ReID model, but the lack of supervision signals leads to less satisfactory segmentation results. In this paper, we argue that the pre-existing information in the ReID training dataset can be directly used as supervision signals to train the human parsing model without any extra annotation. By integrating a weakly supervised human co-parsing network into the ReID network, we propose a novel framework, called the Human Co-parsing Guided Alignment (HCGA) framework, that exploits shared information across different images of the same pedestrian. Specifically, the human co-parsing network is weakly supervised by three consistency criteria, namely global semantics, local space, and background. By feeding the semantic information and deep features from the person ReID network into the guided alignment module, features of the foreground and human parts can then be obtained for effective occluded person ReID. Experimental results on two occluded and two holistic datasets demonstrate the superiority of our method. In particular, on Occluded-DukeMTMC it achieves 70.2% Rank-1 accuracy and 57.5% mAP.
650 4 |a Journal Article 
700 1 |a Zhao, Cairong  |e verfasserin  |4 aut 
700 1 |a Jiang, Xinyang  |e verfasserin  |4 aut 
700 1 |a Zhang, Shanshan  |e verfasserin  |4 aut 
700 1 |a Zheng, Wei-Shi  |e verfasserin  |4 aut 
700 1 |a Zuo, Wangmeng  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g PP(2022) vom: 20. Dez.  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:PP  |g year:2022  |g day:20  |g month:12 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2022.3229639  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2022  |b 20  |c 12