Cross-Layer Contrastive Learning of Latent Semantics for Facial Expression Recognition

Convolutional neural networks (CNNs) have achieved significant improvement for the task of facial expression recognition. However, current training still suffers from inconsistent learning intensities among different layers, i.e., the feature representations in the shallow layers are not sufficiently learned compared with those in the deep layers. To this end, this work proposes a contrastive learning framework to align the feature semantics of shallow and deep layers, followed by an attention module for representing the multi-scale features in a weight-adaptive manner. The proposed algorithm has three main merits. First, the learning intensity, defined as the magnitude of the backpropagation gradient, of the features on the shallow layer is enhanced by cross-layer contrastive learning. Second, the latent semantics in the shallow-layer and deep-layer features are explored and aligned during contrastive learning, and thus the fine-grained characteristics of expressions can be taken into account in representation learning. Third, by integrating the multi-scale features from multiple layers with an attention module, the algorithm achieves state-of-the-art performance, i.e., 92.21%, 89.50%, and 62.82% on three in-the-wild expression databases (RAF-DB, FERPlus, and SFEW), and the second-best performance, 65.29%, on the AffectNet dataset. Our code will be made publicly available.
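
The abstract describes two components: a contrastive loss that aligns shallow-layer and deep-layer feature semantics (which strengthens the gradient signal reaching shallow layers), and an attention module that fuses multi-scale features with adaptive weights. The sketch below illustrates one plausible reading of those two ideas in PyTorch; it is not the authors' released implementation, and all module names, dimensions, and the InfoNCE-style loss choice are illustrative assumptions.

    # Hypothetical sketch (not the paper's code): cross-layer contrastive alignment
    # of shallow- and deep-layer CNN features, plus weight-adaptive multi-scale fusion.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ProjectionHead(nn.Module):
        """Maps pooled features from one layer into a shared embedding space."""
        def __init__(self, in_dim, emb_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, emb_dim), nn.ReLU(inplace=True),
                nn.Linear(emb_dim, emb_dim),
            )

        def forward(self, x):                              # x: (B, C, H, W)
            x = F.adaptive_avg_pool2d(x, 1).flatten(1)     # global average pooling
            return F.normalize(self.net(x), dim=1)         # unit-norm embeddings

    def cross_layer_contrastive_loss(z_shallow, z_deep, temperature=0.1):
        """InfoNCE-style loss: the deep embedding of the same image is the positive
        for its shallow embedding; other images in the batch act as negatives.
        Gradients flowing through z_shallow boost shallow-layer learning intensity."""
        logits = z_shallow @ z_deep.t() / temperature      # (B, B) similarity matrix
        targets = torch.arange(z_shallow.size(0), device=z_shallow.device)
        return F.cross_entropy(logits, targets)

    class WeightAdaptiveFusion(nn.Module):
        """Attention over multi-scale features: each layer's pooled feature is
        projected to a common width and re-weighted by input-dependent scores."""
        def __init__(self, in_dims, fused_dim=256):
            super().__init__()
            self.projs = nn.ModuleList([nn.Linear(d, fused_dim) for d in in_dims])
            self.score = nn.Linear(fused_dim, 1)

        def forward(self, feats):                          # list of (B, C_i, H_i, W_i)
            pooled = [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in feats]
            proj = torch.stack([p(x) for p, x in zip(self.projs, pooled)], dim=1)
            attn = torch.softmax(self.score(proj), dim=1)  # (B, L, 1) layer weights
            return (attn * proj).sum(dim=1)                # (B, fused_dim)

    # Example wiring with illustrative shapes: classification loss plus the
    # cross-layer contrastive term on shallow/deep embeddings.
    feat_shallow = torch.randn(8, 64, 56, 56)              # e.g. an early CNN stage
    feat_deep = torch.randn(8, 512, 7, 7)                  # e.g. the last CNN stage
    head_s, head_d = ProjectionHead(64), ProjectionHead(512)
    fusion = WeightAdaptiveFusion([64, 512], fused_dim=256)
    cls_head = nn.Linear(256, 7)                           # 7 basic expression classes

    z_s, z_d = head_s(feat_shallow), head_d(feat_deep)
    logits = cls_head(fusion([feat_shallow, feat_deep]))
    labels = torch.randint(0, 7, (8,))
    loss = F.cross_entropy(logits, labels) + cross_layer_contrastive_loss(z_s, z_d)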


Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 33(2024), of the 26th, pages 2514-2529
First author: Xie, Weicheng (author)
Other authors: Peng, Zhibin, Shen, Linlin, Lu, Wenya, Zhang, Yang, Song, Siyang
Format: Online article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000caa a22002652c 4500
001 NLM370202856
003 DE-627
005 20250306000049.0
007 cr uuu---uuuuu
008 240328s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2024.3378459  |2 doi 
028 5 2 |a pubmed25n1233.xml 
035 |a (DE-627)NLM370202856 
035 |a (NLM)38530732 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Xie, Weicheng  |e verfasserin  |4 aut 
245 1 0 |a Cross-Layer Contrastive Learning of Latent Semantics for Facial Expression Recognition 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 03.04.2024 
500 |a Date Revised 03.04.2024 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a Convolutional neural networks (CNNs) have achieved significant improvement for the task of facial expression recognition. However, current training still suffers from inconsistent learning intensities among different layers, i.e., the feature representations in the shallow layers are not sufficiently learned compared with those in the deep layers. To this end, this work proposes a contrastive learning framework to align the feature semantics of shallow and deep layers, followed by an attention module for representing the multi-scale features in a weight-adaptive manner. The proposed algorithm has three main merits. First, the learning intensity, defined as the magnitude of the backpropagation gradient, of the features on the shallow layer is enhanced by cross-layer contrastive learning. Second, the latent semantics in the shallow-layer and deep-layer features are explored and aligned during contrastive learning, and thus the fine-grained characteristics of expressions can be taken into account in representation learning. Third, by integrating the multi-scale features from multiple layers with an attention module, the algorithm achieves state-of-the-art performance, i.e., 92.21%, 89.50%, and 62.82% on three in-the-wild expression databases (RAF-DB, FERPlus, and SFEW), and the second-best performance, 65.29%, on the AffectNet dataset. Our code will be made publicly available.
650 4 |a Journal Article 
700 1 |a Peng, Zhibin  |e verfasserin  |4 aut 
700 1 |a Shen, Linlin  |e verfasserin  |4 aut 
700 1 |a Lu, Wenya  |e verfasserin  |4 aut 
700 1 |a Zhang, Yang  |e verfasserin  |4 aut 
700 1 |a Song, Siyang  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 33(2024) vom: 26., Seite 2514-2529  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnas 
773 1 8 |g volume:33  |g year:2024  |g day:26  |g pages:2514-2529 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2024.3378459  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 33  |j 2024  |b 26  |h 2514-2529