TalkingStyle: Personalized Speech-Driven 3D Facial Animation with Style Preservation

It is a challenging task to create realistic 3D avatars that accurately replicate individuals' speech and unique talking styles for speech-driven facial animation. Existing techniques have made remarkable progress but still struggle to achieve lifelike mimicry. This paper proposes "TalkingStyle", a novel method to generate personalized talking avatars while retaining the talking style of the person. Our approach uses a set of audio and animation samples from an individual to create new facial animations that closely resemble their specific talking style, synchronized with speech. We disentangle the style codes from the motion patterns, allowing our method to associate a distinct identifier with each person. To manage each aspect effectively, we employ three separate encoders for style, speech, and motion, ensuring the preservation of the original style while maintaining consistent motion in our stylized talking avatars. Additionally, we propose a new style-conditioned transformer decoder, offering greater flexibility and control over the facial avatar styles. We comprehensively evaluate TalkingStyle through qualitative and quantitative assessments, as well as user studies demonstrating its superior realism and lip synchronization accuracy compared to current state-of-the-art methods. To promote transparency and further advancements in the field, we also make the source code publicly available at https://github.com/wangxuanx/TalkingStyle
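
The abstract sketches a concrete architecture: per-speaker style codes disentangled from motion, three separate encoders for style, speech, and motion, and a style-conditioned transformer decoder. A minimal PyTorch sketch of that layout follows; every module name, dimension, and the additive style-conditioning scheme are illustrative assumptions, not the authors' implementation (the real code is in the linked repository).

import torch
import torch.nn as nn

# Minimal sketch of a three-encoder, style-conditioned transformer decoder
# in the spirit of the abstract. All sizes and the fusion scheme are
# illustrative assumptions; the authors' code is at
# https://github.com/wangxuanx/TalkingStyle.
class TalkingStyleSketch(nn.Module):
    def __init__(self, n_speakers=8, d_audio=768, d_model=256, n_vertices=5023 * 3):
        super().__init__()
        # Style encoder: one learned style code per speaker identifier,
        # independent of speech and motion (the disentanglement in the abstract).
        self.style_encoder = nn.Embedding(n_speakers, d_model)
        # Speech encoder: stand-in projection for pretrained audio features
        # (e.g. 768-dim wav2vec-style frames).
        self.speech_encoder = nn.Linear(d_audio, d_model)
        # Motion encoder: embeds past per-frame vertex offsets.
        self.motion_encoder = nn.Linear(n_vertices, d_model)
        # Style-conditioned decoder: motion queries cross-attend to speech.
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_vertices)  # back to vertex offsets

    def forward(self, audio_feats, past_motion, speaker_id):
        # audio_feats: (B, T, d_audio); past_motion: (B, T, n_vertices)
        style = self.style_encoder(speaker_id).unsqueeze(1)  # (B, 1, D)
        speech = self.speech_encoder(audio_feats)            # (B, T, D)
        queries = self.motion_encoder(past_motion) + style   # add style code to every query
        out = self.decoder(tgt=queries, memory=speech)       # cross-attention to speech
        return self.head(out)                                # (B, T, n_vertices)

model = TalkingStyleSketch()
audio = torch.randn(1, 10, 768)
motion = torch.zeros(1, 10, 5023 * 3)
print(model(audio, motion, torch.tensor([3])).shape)  # torch.Size([1, 10, 15069])

Here the style code is simply added to every motion query token before decoding; the published model may condition the decoder differently (for example via cross-attention or layer modulation).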


Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - PP(2024), 11 June
Main Author: Song, Wenfeng (Author)
Other Authors: Wang, Xuan, Zheng, Shi, Li, Shuai, Hao, Aimin, Hou, Xia
Format: Online Article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM373495978
003 DE-627
005 20241107232052.0
007 cr uuu---uuuuu
008 240612s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2024.3409568  |2 doi 
028 5 2 |a pubmed24n1593.xml 
035 |a (DE-627)NLM373495978 
035 |a (NLM)38861445 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Song, Wenfeng  |e verfasserin  |4 aut 
245 1 0 |a TalkingStyle  |b Personalized Speech-Driven 3D Facial Animation with Style Preservation 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 07.11.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a It is a challenging task to create realistic 3D avatars that accurately replicate individuals' speech and unique talking styles for speech-driven facial animation. Existing techniques have made remarkable progress but still struggle to achieve lifelike mimicry. This paper proposes "TalkingStyle", a novel method to generate personalized talking avatars while retaining the talking style of the person. Our approach uses a set of audio and animation samples from an individual to create new facial animations that closely resemble their specific talking style, synchronized with speech. We disentangle the style codes from the motion patterns, allowing our method to associate a distinct identifier with each person. To manage each aspect effectively, we employ three separate encoders for style, speech, and motion, ensuring the preservation of the original style while maintaining consistent motion in our stylized talking avatars. Additionally, we propose a new style-conditioned transformer decoder, offering greater flexibility and control over the facial avatar styles. We comprehensively evaluate TalkingStyle through qualitative and quantitative assessments, as well as user studies demonstrating its superior realism and lip synchronization accuracy compared to current state-of-the-art methods. To promote transparency and further advancements in the field, we also make the source code publicly available at https://github.com/wangxuanx/TalkingStyle 
650 4 |a Journal Article 
700 1 |a Wang, Xuan  |e verfasserin  |4 aut 
700 1 |a Zheng, Shi  |e verfasserin  |4 aut 
700 1 |a Li, Shuai  |e verfasserin  |4 aut 
700 1 |a Hao, Aimin  |e verfasserin  |4 aut 
700 1 |a Hou, Xia  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g PP(2024) vom: 11. Juni  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:PP  |g year:2024  |g day:11  |g month:06 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2024.3409568  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2024  |b 11  |c 06
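
The block above is a text-style rendering of the underlying MARC 21 record: each line carries a field tag, optional indicators, and |-delimited subfields. As a convenience, here is a minimal Python sketch that pulls a few subfields out of that display form; it is a toy parser for this listing only (real MARC data should go through a proper library such as pymarc), and the sampled lines are copied from the record above.

# Toy parser for the text-style MARC display above: each line is
# "TAG [indicators] |a value |b value ...". It keeps one field per tag,
# which is enough for the unique tags sampled here.
RECORD = """\
024 7 |a 10.1109/TVCG.2024.3409568  |2 doi
245 1 0 |a TalkingStyle  |b Personalized Speech-Driven 3D Facial Animation with Style Preservation
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2024.3409568  |3 Volltext
"""

def parse_line(line):
    """Split one display line into (tag, {subfield_code: value})."""
    head, *subs = line.split("|")
    tag = head.split()[0]  # "245 1 0" -> "245"
    subfields = {}
    for sub in subs:
        code, _, value = sub.partition(" ")
        subfields[code] = value.strip()
    return tag, subfields

fields = dict(parse_line(l) for l in RECORD.splitlines())
print("Title:", fields["245"]["a"] + ":", fields["245"]["b"])
print("DOI:  ", fields["024"]["a"])
print("Link: ", fields["856"]["u"])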