LEADER 01000caa a22002652 4500
001 NLM377421723
003 DE-627
005 20241011232351.0
007 cr uuu---uuuuu
008 240911s2024 xx |||||o 00| ||eng c
024 7  |a 10.1109/TVCG.2024.3456213 |2 doi
028 52 |a pubmed24n1564.xml
035    |a (DE-627)NLM377421723
035    |a (NLM)39255115
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Song, Wenfeng |e verfasserin |4 aut
245 10 |a Expressive 3D Facial Animation Generation Based on Local-to-Global Latent Diffusion
264  1 |c 2024
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Revised 10.10.2024
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a 3D facial animations, crucial to augmented and mixed reality digital media, have evolved from mere aesthetic elements into potent storytelling media. Despite considerable progress on facial animation with neutral emotion, existing methods still struggle to capture the authenticity of emotions. This paper introduces a novel approach to capturing fine facial expressions and generating facial animations synchronized with audio. Our method consists of two key components. First, the Local-to-global Latent Diffusion Model (LG-LDM), tailored for authentic facial expressions, integrates audio, diffusion time step, facial expressions, and other conditions to encode emotionally rich latent features from possibly noisy raw audio signals. The core of LG-LDM is our carefully designed Facial Denoiser Model (FDM), which aligns local-to-global animation features with the audio. Second, we redesign an Emotion-centric Vector Quantized-Variational AutoEncoder framework (EVQ-VAE) to finely decode the subtle differences among emotions and reconstruct the final 3D facial geometry. Our work addresses the key challenges of emotionally realistic, audio-synchronized 3D facial animation and enhances the immersive experience and emotional depth of augmented and mixed reality applications. We provide a reproducibility kit including our code, dataset, and detailed instructions for running the experiments. This kit is available at https://github.com/wangxuanx/Face-Diffusion-Model
650  4 |a Journal Article
700 1  |a Wang, Xuan |e verfasserin |4 aut
700 1  |a Jiang, Yiming |e verfasserin |4 aut
700 1  |a Li, Shuai |e verfasserin |4 aut
700 1  |a Hao, Aimin |e verfasserin |4 aut
700 1  |a Hou, Xia |e verfasserin |4 aut
700 1  |a Qin, Hong |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on visualization and computer graphics |d 1996 |g 30(2024), 11 vom: 11. Okt., Seite 7397-7407 |w (DE-627)NLM098269445 |x 1941-0506 |7 nnns
773 18 |g volume:30 |g year:2024 |g number:11 |g day:11 |g month:10 |g pages:7397-7407
856 40 |u http://dx.doi.org/10.1109/TVCG.2024.3456213 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 30 |j 2024 |e 11 |b 11 |c 10 |h 7397-7407
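
Note: the 520 abstract describes a two-stage pipeline: an audio-conditioned latent diffusion model (LG-LDM, with the FDM denoiser at its core) produces expression latents, and an emotion-centric VQ-VAE decoder (EVQ-VAE) reconstructs the 3D facial geometry from them. The following is a minimal, hypothetical PyTorch sketch of that data flow only, not the authors' implementation (their code is linked in the 856 field); all module names, dimensions (e.g. the 5023-vertex mesh), and the single-step denoising shortcut are illustrative assumptions.

    import torch
    import torch.nn as nn

    class FacialDenoiser(nn.Module):
        # Hypothetical stand-in for the paper's FDM: predicts the noise added
        # to an expression latent, conditioned on an audio feature, an emotion
        # label, and the diffusion time step. Dimensions are assumptions.
        def __init__(self, latent_dim=64, audio_dim=128, n_emotions=8, n_steps=1000):
            super().__init__()
            self.t_embed = nn.Embedding(n_steps, latent_dim)     # time-step embedding
            self.e_embed = nn.Embedding(n_emotions, latent_dim)  # emotion embedding
            self.audio_proj = nn.Linear(audio_dim, latent_dim)   # audio conditioning
            self.net = nn.Sequential(
                nn.Linear(latent_dim * 2, 256), nn.SiLU(),
                nn.Linear(256, latent_dim),
            )

        def forward(self, z_noisy, audio, emotion, t):
            cond = self.audio_proj(audio) + self.e_embed(emotion) + self.t_embed(t)
            return self.net(torch.cat([z_noisy, cond], dim=-1))  # predicted noise

    class VertexDecoder(nn.Module):
        # Hypothetical stand-in for the EVQ-VAE decoder: maps a (denoised)
        # expression latent to per-vertex 3D positions. The vector-quantization
        # codebook is omitted for brevity; 5023 vertices is an assumption.
        def __init__(self, latent_dim=64, n_vertices=5023):
            super().__init__()
            self.n_vertices = n_vertices
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 512), nn.SiLU(),
                nn.Linear(512, n_vertices * 3),
            )

        def forward(self, z):
            return self.net(z).view(-1, self.n_vertices, 3)

    # Toy usage: one crude single-step denoise of random latents, then decoding.
    denoiser, decoder = FacialDenoiser(), VertexDecoder()
    z = torch.randn(2, 64)               # noisy expression latents
    audio = torch.randn(2, 128)          # per-frame audio features
    emotion = torch.randint(0, 8, (2,))  # emotion labels
    t = torch.randint(0, 1000, (2,))     # diffusion time steps
    verts = decoder(z - denoiser(z, audio, emotion, t))
    print(verts.shape)                   # torch.Size([2, 5023, 3])

In the actual method, a full reverse-diffusion loop over time steps would replace the single-step subtraction; this sketch only fixes the conditioning and decoding structure the abstract outlines.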