Facial Action Unit Recognition and Intensity Estimation Enhanced through Label Dependencies

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - (2018), 26 Oct.
First author: Wang, Shangfei (Author)
Other authors: Hao, Longfei; Ji, Qiang
Format: Online article
Language: English
Published: 2018
Part of: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Abstract: The inherent dependencies among facial action units (AUs), caused by the underlying anatomic mechanism, are essential for the proper recognition of AUs and the estimation of their intensity levels, but they have not been exploited to their full potential. We propose novel methods to recognize AUs and estimate their intensities via hybrid Bayesian networks. The upper two layers are latent regression Bayesian networks (LRBNs), and the lower layers are Bayesian networks (BNs). The visible nodes of the LRBN layers represent ground-truth AU occurrences or AU intensities. Through the directed connections from the latent layer to the visible layer, an LRBN can successfully represent the relationships among multiple AUs or AU intensities. The lower layers consist of Bayesian networks with two nodes for AU recognition and Bayesian networks with three nodes for AU intensity estimation. These bottom layers combine measurements extracted from facial images with the AU dependencies for AU recognition and intensity estimation. Efficient learning algorithms for the hybrid Bayesian networks are proposed for both AU recognition and intensity estimation. Furthermore, since AU relationships are closely related to facial expressions, the proposed hybrid Bayesian network models are extended to facial expression-assisted AU recognition and intensity estimation. We test our methods on three benchmark databases for AU recognition and two benchmark databases for intensity estimation. The results demonstrate that the proposed approaches faithfully model the complex, global inherent AU dependencies, and that expression labels available only during training can boost the estimation of AU dependencies for both AU recognition and intensity estimation.
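The layered structure described in the abstract can be made concrete with a small numerical sketch. The following Python snippet is an illustrative assumption, not the authors' implementation: it pairs a toy LRBN-style prior over AU occurrences (a latent layer feeding visible AU-label nodes) with a per-AU two-node measurement model, and combines the two into a posterior over AU presence; all sizes, weights, and distributional choices are hypothetical.

# Conceptual sketch (not the authors' code) of a hybrid Bayesian network for AU
# recognition: an upper latent regression Bayesian network (LRBN) whose visible
# nodes are AU occurrence labels, stacked on simple two-node lower BNs that link
# each AU label to an image-based measurement. All parameter values, sizes, and
# the inference scheme below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# LRBN layer: latent nodes h generate visible AU labels z via directed edges.
n_latent, n_aus = 4, 6                               # hypothetical sizes
W = rng.normal(scale=0.5, size=(n_latent, n_aus))    # latent-to-visible weights
b = np.zeros(n_aus)                                  # visible-node biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lrbn_prior(h):
    """P(z_j = 1 | h) for each visible AU label z_j given latent vector h."""
    return sigmoid(h @ W + b)

# Lower BNs: one two-node BN per AU, modeling P(measurement m_j | label z_j)
# as a Gaussian with a label-dependent mean (assumed values).
mu = np.array([[0.0, 1.5]] * n_aus)   # mean of m_j given z_j = 0 or z_j = 1
sigma = 0.5

def measurement_likelihood(m, z):
    """P(m_j | z_j) under the assumed Gaussian measurement model."""
    return np.exp(-0.5 * ((m - mu[np.arange(n_aus), z]) / sigma) ** 2)

# Toy posterior over AU labels: combine the LRBN prior with measurement evidence.
h = rng.normal(size=n_latent)                     # one sample of the latent layer
m = rng.normal(loc=1.0, scale=0.5, size=n_aus)    # image-based measurements

prior_on = lrbn_prior(h)
post_on = prior_on * measurement_likelihood(m, np.ones(n_aus, dtype=int))
post_off = (1 - prior_on) * measurement_likelihood(m, np.zeros(n_aus, dtype=int))
posterior = post_on / (post_on + post_off)
print("P(AU present | measurement):", np.round(posterior, 3))

In this toy setup the latent layer plays the role the paper assigns to the LRBN, capturing dependencies among AU labels, while the per-AU measurement models stand in for the lower BNs that connect labels to image-based evidence.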
Description: Date Revised 27.02.2024
Published: Print-Electronic
Citation Status: Publisher
ISSN:1941-0042
DOI:10.1109/TIP.2018.2878339