Emotional Voice Puppetry

The paper presents emotional voice puppetry, an audio-based facial animation approach for portraying characters with vivid emotional changes. The lip motion and surrounding facial areas are controlled by the audio content, while the facial dynamics are established by the emotion category and its intensity. Our approach is distinctive in that it accounts for perceptual validity and geometry rather than relying on purely geometric processes. Another highlight of our approach is its generalizability to multiple characters. The findings showed that training new secondary characters with rig parameters categorized as eyes, eyebrows, nose, mouth, and signature wrinkles yields significantly better generalization results than joint training. User studies demonstrate the effectiveness of our approach both qualitatively and quantitatively. Our approach is applicable in AR/VR and 3DUI, namely virtual reality avatars/self-avatars, teleconferencing, and in-game dialogue.

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - PP(2023), 22 Feb.
Main Author: Pan, Ye (Author)
Other Authors: Zhang, Ruisi, Cheng, Shengran, Tan, Shuai, Ding, Yu, Mitchell, Kenny, Yang, Xubo
Format: Online Article
Language: English
Published: 2023
Access to the parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM355323788
003 DE-627
005 20231226064115.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2023.3247101  |2 doi 
028 5 2 |a pubmed24n1184.xml 
035 |a (DE-627)NLM355323788 
035 |a (NLM)37027720 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Pan, Ye  |e verfasserin  |4 aut 
245 1 0 |a Emotional Voice Puppetry 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 07.04.2023 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a The paper presents emotional voice puppetry, an audio-based facial animation approach for portraying characters with vivid emotional changes. The lip motion and surrounding facial areas are controlled by the audio content, while the facial dynamics are established by the emotion category and its intensity. Our approach is distinctive in that it accounts for perceptual validity and geometry rather than relying on purely geometric processes. Another highlight of our approach is its generalizability to multiple characters. The findings showed that training new secondary characters with rig parameters categorized as eyes, eyebrows, nose, mouth, and signature wrinkles yields significantly better generalization results than joint training. User studies demonstrate the effectiveness of our approach both qualitatively and quantitatively. Our approach is applicable in AR/VR and 3DUI, namely virtual reality avatars/self-avatars, teleconferencing, and in-game dialogue.
650 4 |a Journal Article 
700 1 |a Zhang, Ruisi  |e verfasserin  |4 aut 
700 1 |a Cheng, Shengran  |e verfasserin  |4 aut 
700 1 |a Tan, Shuai  |e verfasserin  |4 aut 
700 1 |a Ding, Yu  |e verfasserin  |4 aut 
700 1 |a Mitchell, Kenny  |e verfasserin  |4 aut 
700 1 |a Yang, Xubo  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g PP(2023) vom: 22. Feb.  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:PP  |g year:2023  |g day:22  |g month:02 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2023.3247101  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2023  |b 22  |c 02