Confidence Preserving Machine for Facial Action Unit Detection

Facial action unit (AU) detection from video has been a long-standing problem in automated facial expression analysis. While progress has been made, accurate detection of facial AUs remains challenging due to ubiquitous sources of error, such as inter-personal variability, pose, and low-intensity AUs. In this paper, we refer to samples causing such errors as hard samples, and to the remaining samples as easy samples. To address learning with hard samples, we propose the confidence preserving machine (CPM), a novel two-stage learning framework that combines multiple classifiers following an "easy-to-hard" strategy. During the training stage, CPM learns two confident classifiers. Each classifier focuses on separating easy samples of one class from all else, and thus preserves confidence in predicting each class. During the test stage, the confident classifiers provide "virtual labels" for easy test samples. Given the virtual labels, we propose a quasi-semi-supervised (QSS) learning strategy to learn a person-specific classifier. The QSS strategy employs a spatio-temporal smoothness constraint that encourages similar predictions for samples within a spatio-temporal neighborhood. In addition, to further improve detection performance, we introduce two CPM extensions: iterative CPM, which iteratively augments the training samples used to train the confident classifiers, and kernel CPM, which kernelizes the original CPM model to promote nonlinearity. Experiments on four spontaneous data sets (GFT, BP4D, DISFA, and RU-FACS) illustrate the benefits of the proposed CPM models over baseline methods and over state-of-the-art semi-supervised learning and transfer learning methods.
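The abstract describes a two-stage, easy-to-hard pipeline: confident classifiers assign "virtual labels" to easy test samples, and a quasi-semi-supervised (QSS) step then learns a person-specific classifier under a spatio-temporal smoothness assumption. The short Python sketch below illustrates that flow only in spirit; the class-weighted linear SVMs, the margin threshold tau, the k-NN label spreading used as a proxy for the smoothness term, and all function names are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.semi_supervised import LabelSpreading

def fit_confident_classifiers(X_train, y_train):
    # Two classifiers, each biased toward one class; class-weighted SVMs are a
    # simple stand-in for the paper's confident classifiers.
    pos_clf = LinearSVC(class_weight={1: 2.0, 0: 1.0}).fit(X_train, y_train)
    neg_clf = LinearSVC(class_weight={1: 1.0, 0: 2.0}).fit(X_train, y_train)
    return pos_clf, neg_clf

def virtual_labels(pos_clf, neg_clf, X_test, tau=1.0):
    # Assign virtual labels only to "easy" test samples on which both classifiers
    # agree with a margin larger than tau; hard samples stay unlabeled (-1).
    s_pos = pos_clf.decision_function(X_test)
    s_neg = neg_clf.decision_function(X_test)
    labels = np.full(X_test.shape[0], -1)
    labels[(s_pos > tau) & (s_neg > tau)] = 1
    labels[(s_pos < -tau) & (s_neg < -tau)] = 0
    return labels

def person_specific_predictions(X_test, vlabels, n_neighbors=5):
    # QSS step (illustrative): spread the virtual labels to hard samples over a
    # k-NN graph, a rough proxy for the spatio-temporal smoothness constraint.
    qss = LabelSpreading(kernel="knn", n_neighbors=n_neighbors)
    qss.fit(X_test, vlabels)  # entries equal to -1 are treated as unlabeled
    return qss.transduction_

A hypothetical usage, with X_train/y_train drawn from generic subjects and X_test from a single test person: first call fit_confident_classifiers, then pass the resulting classifiers and X_test through virtual_labels and person_specific_predictions to obtain per-frame AU predictions.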

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 25(2016), 10, 09 Oct., pages 4753-4767
Main Author: Jiabei Zeng (Author)
Other Authors: Wen-Sheng Chu, De la Torre, Fernando, Cohn, Jeffrey F, Zhang Xiong
Format: Online Article
Language: English
Published: 2016
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM263014495
003 DE-627
005 20231224202941.0
007 cr uuu---uuuuu
008 231224s2016 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2016.2594486  |2 doi 
028 5 2 |a pubmed24n0876.xml 
035 |a (DE-627)NLM263014495 
035 |a (NLM)27479964 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Jiabei Zeng  |e verfasserin  |4 aut 
245 1 0 |a Confidence Preserving Machine for Facial Action Unit Detection 
264 1 |c 2016 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 09.01.2021 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Facial action unit (AU) detection from video has been a long-standing problem in automated facial expression analysis. While progress has been made, accurate detection of facial AUs remains challenging due to ubiquitous sources of error, such as inter-personal variability, pose, and low-intensity AUs. In this paper, we refer to samples causing such errors as hard samples, and to the remaining samples as easy samples. To address learning with hard samples, we propose the confidence preserving machine (CPM), a novel two-stage learning framework that combines multiple classifiers following an "easy-to-hard" strategy. During the training stage, CPM learns two confident classifiers. Each classifier focuses on separating easy samples of one class from all else, and thus preserves confidence in predicting each class. During the test stage, the confident classifiers provide "virtual labels" for easy test samples. Given the virtual labels, we propose a quasi-semi-supervised (QSS) learning strategy to learn a person-specific classifier. The QSS strategy employs a spatio-temporal smoothness constraint that encourages similar predictions for samples within a spatio-temporal neighborhood. In addition, to further improve detection performance, we introduce two CPM extensions: iterative CPM, which iteratively augments the training samples used to train the confident classifiers, and kernel CPM, which kernelizes the original CPM model to promote nonlinearity. Experiments on four spontaneous data sets (GFT, BP4D, DISFA, and RU-FACS) illustrate the benefits of the proposed CPM models over baseline methods and over state-of-the-art semi-supervised learning and transfer learning methods.
650 4 |a Journal Article 
700 1 |a Wen-Sheng Chu  |e verfasserin  |4 aut 
700 1 |a De la Torre, Fernando  |e verfasserin  |4 aut 
700 1 |a Cohn, Jeffrey F  |e verfasserin  |4 aut 
700 1 |a Zhang Xiong  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 25(2016), 10 vom: 09. Okt., Seite 4753-4767  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:25  |g year:2016  |g number:10  |g day:09  |g month:10  |g pages:4753-4767 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2016.2594486  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 25  |j 2016  |e 10  |b 09  |c 10  |h 4753-4767