Visibility Constrained Generative Model for Depth-Based 3D Facial Pose Tracking

In this paper, we propose a generative framework that unifies depth-based 3D facial pose tracking and face model adaptation on-the-fly, in unconstrained scenarios with heavy occlusions and arbitrary facial expression variations. Specifically, we introduce a statistical 3D morphable model that flexibly describes the distribution of points on the surface of the face model, with an efficient switchable online adaptation that gradually captures the identity of the tracked subject and rapidly constructs a suitable face model when the subject changes. Moreover, unlike prior art that employed ICP-based facial pose estimation, to improve robustness to occlusions we propose a ray visibility constraint that regularizes the pose based on the face model's visibility with respect to the input point cloud. Ablation studies and experimental results on the Biwi and ICT-3DHP datasets demonstrate that the proposed framework is effective and outperforms competing state-of-the-art depth-based methods.
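As far as it can be inferred from the abstract alone, the ray visibility constraint amounts to restricting the pose registration to those points of the face model that the depth camera can actually see, so that occluded regions do not pull the pose estimate. The Python sketch below illustrates that idea only; the helper names estimate_visibility and ray_visibility_residuals are hypothetical, and the back-face test plus brute-force nearest-neighbour association are simplifying assumptions rather than the paper's actual probabilistic formulation.

import numpy as np

def estimate_visibility(model_pts, model_normals, cam_origin=np.zeros(3)):
    """Approximate per-vertex visibility: a model point is treated as visible
    when its outward normal faces the camera (simple back-face test)."""
    view_dirs = cam_origin - model_pts
    view_dirs /= (np.linalg.norm(view_dirs, axis=1, keepdims=True) + 1e-12)
    return np.sum(model_normals * view_dirs, axis=1) > 0.0

def ray_visibility_residuals(model_pts, model_normals, depth_pts):
    """Point-to-point residuals restricted to visible model points, to be fed
    into a robust pose solver (e.g. Gauss-Newton over a rigid transform)."""
    visible = estimate_visibility(model_pts, model_normals)
    vis_pts = model_pts[visible]
    # brute-force nearest-neighbour association with the input point cloud
    d2 = np.linalg.norm(vis_pts[:, None, :] - depth_pts[None, :, :], axis=2)
    nearest = depth_pts[np.argmin(d2, axis=1)]
    return vis_pts - nearest

if __name__ == "__main__":
    # toy data: points roughly one metre in front of a camera at the origin,
    # with normals oriented back toward the camera so they count as visible
    rng = np.random.default_rng(0)
    pts = rng.normal(scale=0.05, size=(50, 3)) + np.array([0.0, 0.0, 1.0])
    nrm = -pts / np.linalg.norm(pts, axis=1, keepdims=True)
    cloud = pts + 0.01 * rng.normal(size=pts.shape)
    print(ray_visibility_residuals(pts, nrm, cloud).shape)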

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 41(2019), 8, 29 Aug., pages 1994-2007
Main Author: Sheng, Lu (Author)
Other Authors: Cai, Jianfei, Cham, Tat-Jen, Pavlovic, Vladimir, Ngan, King Ngi
Format: Online Article
Language: English
Published: 2019
Access to parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article; Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM29003423X
003 DE-627
005 20231225063843.0
007 cr uuu---uuuuu
008 231225s2019 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2018.2877675  |2 doi 
028 5 2 |a pubmed24n0966.xml 
035 |a (DE-627)NLM29003423X 
035 |a (NLM)30369437 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Sheng, Lu  |e verfasserin  |4 aut 
245 1 0 |a Visibility Constrained Generative Model for Depth-Based 3D Facial Pose Tracking 
264 1 |c 2019 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 22.08.2019 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a In this paper, we propose a generative framework that unifies depth-based 3D facial pose tracking and face model adaptation on-the-fly, in unconstrained scenarios with heavy occlusions and arbitrary facial expression variations. Specifically, we introduce a statistical 3D morphable model that flexibly describes the distribution of points on the surface of the face model, with an efficient switchable online adaptation that gradually captures the identity of the tracked subject and rapidly constructs a suitable face model when the subject changes. Moreover, unlike prior art that employed ICP-based facial pose estimation, to improve robustness to occlusions we propose a ray visibility constraint that regularizes the pose based on the face model's visibility with respect to the input point cloud. Ablation studies and experimental results on the Biwi and ICT-3DHP datasets demonstrate that the proposed framework is effective and outperforms competing state-of-the-art depth-based methods.
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Cai, Jianfei  |e verfasserin  |4 aut 
700 1 |a Cham, Tat-Jen  |e verfasserin  |4 aut 
700 1 |a Pavlovic, Vladimir  |e verfasserin  |4 aut 
700 1 |a Ngan, King Ngi  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 41(2019), 8 vom: 29. Aug., Seite 1994-2007  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:41  |g year:2019  |g number:8  |g day:29  |g month:08  |g pages:1994-2007 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2018.2877675  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 41  |j 2019  |e 8  |b 29  |c 08  |h 1994-2007