Mind the Gap: Learning Modality-Agnostic Representations With a Cross-Modality UNet


Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 33(2024) from: 07., pages 655-670
First author: Niu, Xin (Author)
Other authors: Li, Enyi, Liu, Jinchao, Wang, Yan, Osadchy, Margarita, Fang, Yongchun
Format: Online article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
LEADER 01000naa a22002652 4500
001 NLM366812793
003 DE-627
005 20240114234843.0
007 cr uuu---uuuuu
008 240114s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2023.3348656  |2 doi 
028 5 2 |a pubmed24n1258.xml 
035 |a (DE-627)NLM366812793 
035 |a (NLM)38190674 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Niu, Xin  |e verfasserin  |4 aut 
245 1 0 |a Mind the Gap  |b Learning Modality-Agnostic Representations With a Cross-Modality UNet 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 12.01.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Cross-modality recognition has many important applications in science, law enforcement, and entertainment. Popular methods for bridging the modality gap include reducing the distributional differences between representations of different modalities, learning indistinguishable representations, or performing explicit modality transfer. The first two approaches lose discriminant information while removing the modality-specific variations. The third relies heavily on successful modality transfer and can suffer a catastrophic performance drop when explicit modality transfer is difficult or impossible. To tackle this problem, we propose a compact encoder-decoder neural module (cmUNet) to learn modality-agnostic representations while retaining identity-related information. This is achieved through cross-modality transformation and in-modality reconstruction, enhanced by an adversarial/perceptual loss that encourages indistinguishability of representations in the original sample space. For cross-modality matching, we propose MarrNet, in which cmUNet is connected to a standard feature-extraction network that takes the modality-agnostic representations as input and outputs similarity scores for matching. We validated our method on five challenging tasks, namely Raman-infrared spectrum matching, cross-modality person re-identification, and heterogeneous (photo-sketch, visible-near-infrared, and visible-thermal) face recognition, on which MarrNet showed superior performance compared to state-of-the-art methods. Furthermore, we observed that a cross-modality matching method can be biased toward extracting discriminant information from partial or even wrong regions when it cannot cope with modality gaps, which subsequently leads to poor generalization. We show that robustness to occlusions can serve as an indicator of whether a method bridges the modality gap well. This has, to our knowledge, been largely neglected in previous works. 
Our experiments demonstrated that MarrNet is highly robust to disguises and occlusions and outperformed existing methods by a large margin (>10%). The proposed cmUNet is a meta-approach and can be used as a building block for various applications. 
650 4 |a Journal Article 
700 1 |a Li, Enyi  |e verfasserin  |4 aut 
700 1 |a Liu, Jinchao  |e verfasserin  |4 aut 
700 1 |a Wang, Yan  |e verfasserin  |4 aut 
700 1 |a Osadchy, Margarita  |e verfasserin  |4 aut 
700 1 |a Fang, Yongchun  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 33(2024) vom: 07., Seite 655-670  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:33  |g year:2024  |g day:07  |g pages:655-670 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2023.3348656  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 33  |j 2024  |b 07  |h 655-670
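The abstract in field 520 describes the matching pipeline only at a high level: a shared encoder-decoder (cmUNet) maps inputs from different modalities into a common, modality-agnostic space, and a downstream network scores pairs for identity matching. The following toy numpy sketch illustrates that pipeline shape only; the class name `ToyCmUNet`, all dimensions, the random weights, and the cosine-similarity scorer are invented for illustration and do not reproduce the paper's trained cmUNet or MarrNet.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not from the paper.
D_IN, D_LATENT = 32, 8


class ToyCmUNet:
    """Toy stand-in for an encoder-decoder bridging module.

    The encoder half maps a modality-specific input into a shared
    ("modality-agnostic") latent space; the decoder half would be used
    at training time for in-modality reconstruction.
    """

    def __init__(self, d_in, d_latent):
        self.enc = rng.standard_normal((d_in, d_latent)) / np.sqrt(d_in)
        self.dec = rng.standard_normal((d_latent, d_in)) / np.sqrt(d_latent)

    def encode(self, x):
        # Modality-agnostic representation.
        return np.tanh(x @ self.enc)

    def reconstruct(self, x):
        # In-modality reconstruction (training-time objective).
        return self.encode(x) @ self.dec


def cosine_similarity(a, b):
    """Similarity score between two latent representations."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


# Matching: encode samples from two modalities with the SAME module,
# then score the pair. A real system would use a learned feature
# extractor on top of the shared representations instead of raw cosine.
unet = ToyCmUNet(D_IN, D_LATENT)
x_modality_a = rng.standard_normal(D_IN)
# Same underlying identity observed through another modality,
# simulated here as a small perturbation of the first sample.
x_modality_b = x_modality_a + 0.1 * rng.standard_normal(D_IN)

score = cosine_similarity(unet.encode(x_modality_a),
                          unet.encode(x_modality_b))
print(f"similarity: {score:.3f}")
```

Because both modalities pass through one shared module, representations of the same identity land near each other in the latent space, which is the property the paper's cross-modality transformation and reconstruction losses are designed to enforce.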