Multi-Biometric Unified Network for Cloth-Changing Person Re-Identification

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 32 (2023), dated: 15., pages 4555-4566
Lead author: Zhang, Guoqing (author)
Other authors: Liu, Jie, Chen, Yuhao, Zheng, Yuhui, Zhang, Hongwei
Format: Online article
Language: English
Published: 2023
Parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Summary: Person re-identification (re-ID) aims to match the same person across different cameras. However, most existing re-ID methods assume that people wear the same clothes in different views, which limits their performance in identifying target pedestrians who change clothes. Cloth-changing re-ID is a quite challenging problem because clothing, which occupies a large fraction of the pixels in an image, becomes invalid or even misleading information. To tackle this problem, we propose a novel Multi-Biometric Unified Network (MBUNet) that learns a robust cloth-changing re-ID model by exploiting clothing-independent cues. Specifically, we first introduce a multi-biological feature branch to extract a variety of biological features, such as the head, neck, and shoulders, to resist clothing changes. Then, a differential feature attention module (DFAM) is embedded in this branch to extract discriminative fine-grained biological features. Besides, we design a differential recombination on max pooling (DRMP) strategy and simultaneously apply a direction-adaptive graph convolutional layer to mine more robust global and pose features. Finally, we propose a Lightweight Domain Adaptation Module (LDAM) that combines attention mechanisms before and after the waveblock to capture and enhance transferable features across scenarios. To further improve performance, we also integrate mAP optimization into the objective function for joint training, addressing the discrete optimization problem of mAP. Extensive experiments on five cloth-changing re-ID datasets demonstrate the advantages of the proposed MBUNet. The code is available at https://github.com/liyeabc/MBUNet
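To give a rough feel for the "differential" idea behind modules such as DFAM and DRMP, the following is a minimal, illustrative NumPy sketch, not the authors' implementation (the function name, the choice of max-minus-average pooling as the differential signal, and the softmax channel weighting are all assumptions): channels whose max-pooled response deviates most from their average response are treated as carrying fine-grained, clothing-independent cues and are upweighted.

```python
import numpy as np

def differential_channel_attention(feat):
    """Hypothetical sketch of difference-based channel attention.

    feat: (C, H, W) feature map.
    Returns a reweighted feature map of the same shape, where channels
    with a large gap between their max-pooled and average-pooled
    responses (a crude "differential" signal) receive larger weights.
    """
    C = feat.shape[0]
    gmp = feat.reshape(C, -1).max(axis=1)   # global max pooling per channel
    gap = feat.reshape(C, -1).mean(axis=1)  # global average pooling per channel
    diff = gmp - gap                        # differential signal per channel
    w = np.exp(diff - diff.max())
    w = w / w.sum()                         # softmax over channels
    # Scale by C so the average channel keeps roughly unit weight.
    return feat * (C * w)[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
y = differential_channel_attention(x)
```

The real DFAM operates on convolutional features inside the multi-biological branch and is learned end to end; this sketch only shows how a pooled difference can be turned into per-channel attention weights.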
Description: Date Revised 16.08.2023
published: Print
Citation Status PubMed-not-MEDLINE
ISSN:1941-0042
DOI:10.1109/TIP.2023.3279673