Expressive 3D Facial Animation Generation Based on Local-to-Global Latent Diffusion


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Visualization and Computer Graphics. - 1996. - 30(2024), 11, 11 Oct., pages 7397-7407
Main Author: Song, Wenfeng (Author)
Other Authors: Wang, Xuan, Jiang, Yiming, Li, Shuai, Hao, Aimin, Hou, Xia, Qin, Hong
Format: Online Article
Language: English
Published: 2024
Access to Parent Work: IEEE Transactions on Visualization and Computer Graphics
Subjects: Journal Article
Description
Abstract: 3D facial animations, crucial to augmented and mixed reality digital media, have evolved from mere aesthetic elements into potent storytelling media. Despite considerable progress in facial animation of neutral emotions, existing methods still struggle to capture the authenticity of emotions. This paper introduces a novel approach to capturing fine facial expressions and generating facial animations synchronized with audio. Our method consists of two key components. First, the Local-to-Global Latent Diffusion Model (LG-LDM), tailored for authentic facial expressions, integrates audio, diffusion time step, facial expression, and other conditions to encode emotionally rich latent features from possibly noisy raw audio signals. The core of LG-LDM is our carefully designed Facial Denoiser Model (FDM), which aligns local-to-global animation features with the audio. Second, we redesign an Emotion-centric Vector Quantized-Variational AutoEncoder (EVQ-VAE) to decode the subtle differences between emotions and reconstruct the final 3D facial geometry. Our work addresses the key challenge of emotionally realistic, audio-synchronized 3D facial animation and enhances the immersive experience and emotional depth of augmented and mixed reality applications. We provide a reproducibility kit including our code, dataset, and detailed instructions for running the experiments, available at https://github.com/wangxuanx/Face-Diffusion-Model
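To make the two-stage pipeline described in the abstract concrete, the Python sketch below illustrates the general idea: a conditional denoiser that refines a motion latent given audio features, an emotion label, and the diffusion time step (the role played by LG-LDM/FDM), followed by a vector-quantized decoder that maps latents to per-vertex mesh displacements (the role played by EVQ-VAE). This is a minimal conceptual sketch under stated assumptions, not the authors' implementation; all class names, dimensions, and layer choices are hypothetical, and the released code is at the GitHub link above.

# Hypothetical PyTorch sketch; NOT the authors' released code (see the GitHub
# link above). It only illustrates the two-stage idea from the abstract: a
# conditional latent denoiser (the LG-LDM/FDM stage) followed by a
# vector-quantized decoder (the EVQ-VAE stage). All names, dimensions, and
# layer choices are assumptions.

import torch
import torch.nn as nn

class FacialDenoiser(nn.Module):
    # Predicts a cleaner motion latent from a noisy one, conditioned on
    # audio features, an emotion label, and the diffusion time step.
    def __init__(self, latent_dim=128, audio_dim=768, num_emotions=8):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, latent_dim), nn.SiLU())
        self.emotion_embed = nn.Embedding(num_emotions, latent_dim)
        self.audio_proj = nn.Linear(audio_dim, latent_dim)
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.out = nn.Linear(latent_dim, latent_dim)

    def forward(self, noisy_latent, audio_feat, emotion_id, t):
        # noisy_latent: (B, T, latent_dim); audio_feat: (B, T, audio_dim)
        cond = (self.audio_proj(audio_feat)
                + self.emotion_embed(emotion_id)[:, None, :]
                + self.time_embed(t.float()[:, None, None]))
        return self.out(self.backbone(noisy_latent + cond))

class VQDecoder(nn.Module):
    # Quantizes latents against a learned codebook and decodes them into
    # per-vertex displacements of the face mesh.
    def __init__(self, latent_dim=128, codebook_size=256, num_vertices=5023):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, latent_dim)
        self.decode = nn.Sequential(nn.Linear(latent_dim, 512), nn.SiLU(),
                                    nn.Linear(512, num_vertices * 3))

    def forward(self, latent):
        flat = latent.reshape(-1, latent.size(-1))         # (B*T, D)
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=-1)
        quantized = self.codebook(idx).view_as(latent)     # nearest codebook entries
        offsets = self.decode(quantized)                   # (B, T, V*3)
        return offsets.view(*latent.shape[:2], -1, 3)

# Toy forward pass with random tensors, just to show the data flow.
B, T = 2, 30
denoiser, decoder = FacialDenoiser(), VQDecoder()
clean_latent = denoiser(torch.randn(B, T, 128),        # noisy motion latent
                        torch.randn(B, T, 768),        # audio features (e.g. wav2vec-style)
                        torch.randint(0, 8, (B,)),     # emotion label
                        torch.randint(0, 1000, (B,)))  # diffusion time step
vertex_offsets = decoder(clean_latent)                 # shape (2, 30, 5023, 3)

In a full diffusion pipeline the denoiser would be applied iteratively across time steps before decoding, but the single pass above is enough to show how the conditions enter the model and how the latent is turned back into mesh geometry.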
Description: Date Revised 10.10.2024
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0506
DOI: 10.1109/TVCG.2024.3456213