Robust Labeling and Invariance Modeling for Unsupervised Cross-Resolution Person Re-Identification
| Published in: | IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society, Vol. 34 (2025), pp. 5557-5569 |
|---|---|
| Main author: | |
| Other authors: | |
| Format: | Online article |
| Language: | English |
| Published: | 2025 |
| In collection: | IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society |
| Subjects: | Journal Article |
| Abstract: | Cross-resolution person re-identification (CR-ReID) aims to match low-resolution (LR) and high-resolution (HR) images of the same individual. To reduce the cost of manual annotation, existing unsupervised CR-ReID methods typically rely on cross-resolution fusion to obtain pseudo-labels and resolution-invariant features. However, the fusion process requires two encoders and a fusion module, which significantly increases computational complexity and reduces efficiency. To address this issue, we propose a robust labeling and invariance modeling (RLIM) framework, which utilizes a single encoder to tackle the unsupervised CR-ReID problem. To obtain pseudo-labels robust to resolution gaps, we develop cross-resolution robust labeling (CRL), which utilizes two clustering criteria to encourage cross-resolution positive pairs to cluster together and exploit the reliable relationships between images. We also introduce random texture augmentation (TexA) to enhance the model's robustness to noisy textures related to artifacts and backgrounds by randomly adjusting texture strength. During the optimization process, we introduce the resolution-cluster consistency loss, which promotes resolution-invariant feature learning by aligning inter-resolution distances with intra-cluster distances. Experimental results on multiple datasets demonstrate that RLIM not only surpasses existing unsupervised methods, but also achieves performance close to some supervised CR-ReID methods. Code is available at https://github.com/zqpang/RLIM |
| Description: | Date revised: 10.09.2025; published in print; citation status: PubMed-not-MEDLINE |
| ISSN: | 1941-0042 |
| DOI: | 10.1109/TIP.2025.3601443 |
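The abstract describes a resolution-cluster consistency loss that "promotes resolution-invariant feature learning by aligning inter-resolution distances with intra-cluster distances." The paper's exact formulation is not given in this record; the following is a minimal sketch of that idea under assumed Euclidean distances, with the function name and the absolute-difference penalty chosen for illustration only (the authors' actual loss may differ; see the linked repository).

```python
import numpy as np

def resolution_cluster_consistency_loss(f_hr, f_lr, cluster_feats):
    """Hypothetical sketch: penalize mismatch between the HR-LR feature
    distance of one identity and the average intra-cluster distance."""
    # Inter-resolution distance between the HR and LR features of the same person.
    d_res = np.linalg.norm(f_hr - f_lr)
    # Mean intra-cluster distance: average distance of cluster members to their centroid.
    centroid = cluster_feats.mean(axis=0)
    d_intra = np.mean(np.linalg.norm(cluster_feats - centroid, axis=1))
    # Aligning the two distances encourages resolution-invariant embeddings.
    return abs(d_res - d_intra)
```

When the HR-LR gap matches the typical spread of the cluster, the penalty vanishes; features pulled apart by resolution alone are penalized.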