Semantically Disentangled Variational Autoencoder for Modeling 3D Facial Details


Bibliographic Details
Published in: IEEE Transactions on Visualization and Computer Graphics (1996-). Vol. 29, No. 8 (12 Aug. 2023), pp. 3630-3641
First Author: Ling, Jingwang (author)
Other Authors: Wang, Zhibo; Lu, Ming; Wang, Quan; Qian, Chen; Xu, Feng
Format: Online Article
Language: English
Published: 2023
Parent work: IEEE Transactions on Visualization and Computer Graphics
Subjects: Journal Article
Description
Abstract: Parametric face models, such as morphable and blendshape models, have shown great potential in face representation, reconstruction, and animation. However, all these models focus on large-scale facial geometry. Facial details such as wrinkles are not parameterized in these models, impeding accuracy and realism. In this article, we propose a method to learn a Semantically Disentangled Variational Autoencoder (SDVAE) to parameterize facial details and support independent detail manipulation as an extension of an off-the-shelf large-scale face model. Our method utilizes the non-linear capability of Deep Neural Networks for detail modeling, achieving better accuracy and greater representation power compared with linear models. In order to disentangle the semantic factors of identity, expression, and age, we propose to eliminate the correlation between different factors in an adversarial manner. Therefore, wrinkle-level details of various identities, expressions, and ages can be generated and independently controlled by changing latent vectors of our SDVAE. We further leverage our model to reconstruct 3D faces via fitting to facial scans and images. Benefiting from our parametric model, we achieve accurate and robust reconstruction, and the reconstructed details can be easily animated and manipulated. We evaluate our method on practical applications, including scan fitting, image fitting, video tracking, model manipulation, and expression and age animation. Extensive experiments demonstrate that the proposed method can robustly model facial details and achieve better results than alternative methods.
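The abstract's core idea is that the latent code is split into semantic factor groups (identity, expression, age) whose statistical dependence is driven to zero during training. The sketch below illustrates that objective with a simple batch-wise cross-correlation penalty between factor groups; this is a simplified stand-in for the paper's adversarial disentanglement, and all names, shapes, and the penalty form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cross_correlation_penalty(z_id, z_exp, z_age):
    """Encourage independence between latent factor groups.

    NOTE: illustrative stand-in for SDVAE's adversarial
    disentanglement; group names and dimensions are assumed.
    Each z_* is a (batch, dim) array of latent codes.
    """
    def center(z):
        # Remove the batch mean so correlations are about variation.
        return z - z.mean(axis=0, keepdims=True)

    def xcorr(a, b):
        # Mean absolute entry of the batch cross-correlation matrix.
        a, b = center(a), center(b)
        c = a.T @ b / len(a)
        return np.abs(c).mean()

    # Penalize correlation between every pair of factor groups.
    return xcorr(z_id, z_exp) + xcorr(z_id, z_age) + xcorr(z_exp, z_age)

# Illustrative usage: independently sampled factors give a small penalty.
rng = np.random.default_rng(0)
z_id = rng.standard_normal((256, 8))
z_exp = rng.standard_normal((256, 8))
z_age = rng.standard_normal((256, 2))
penalty = cross_correlation_penalty(z_id, z_exp, z_age)
```

In the paper this role is played by adversarially trained components that try to predict one factor from another; a differentiable penalty like the one above merely conveys the training signal's direction: lower when factor groups carry unrelated information.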
Description: Date Completed 03.07.2023
Date Revised 03.07.2023
published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN: 1941-0506
DOI: 10.1109/TVCG.2022.3166666