LEADER 01000naa a22002652 4500
001 NLM166292311
003 DE-627
005 20231223110508.0
007 tu
008 231223s2006 xx ||||| 00| ||eng c
028 52 |a pubmed24n0554.xml
035    |a (DE-627)NLM166292311
035    |a (NLM)17073374
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Deng, Zhigang |e verfasserin |4 aut
245 10 |a Expressive facial animation synthesis by learning speech coarticulation and expression spaces
264  1 |c 2006
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a Date Completed 28.11.2006
500    |a Date Revised 10.12.2019
500    |a published: Print
500    |a Citation Status MEDLINE
520    |a Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: first, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input; then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model; finally, the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
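
[Editor's illustration] The 520 abstract describes a concrete pipeline, so a small sketch of its PIEES construction step (phoneme-based time-warping, neutral subtraction, PCA reduction) may help readers of this record. This is a minimal, hypothetical NumPy sketch only: the linear resampling used in place of true phoneme-based warping, the 64-frame target length, the 95% variance cutoff, and every function name are illustrative assumptions, not the authors' implementation.

import numpy as np

def time_warp(signal, target_len):
    # Linearly resample a (frames x markers) motion signal to target_len frames;
    # a stand-in for the paper's phoneme-based time-warping (assumption).
    src = np.linspace(0.0, 1.0, len(signal))
    dst = np.linspace(0.0, 1.0, target_len)
    return np.stack([np.interp(dst, src, signal[:, j])
                     for j in range(signal.shape[1])], axis=1)

def build_piees(expressive_takes, neutral_takes, target_len=64, var_keep=0.95):
    # Warp paired expressive/neutral recordings to a common length, subtract the
    # neutral visual speech to isolate the dynamic expression signal, then reduce
    # the residuals with PCA (via SVD) to obtain the expression eigenspace.
    residuals = []
    for e, n in zip(expressive_takes, neutral_takes):
        residuals.append((time_warp(e, target_len) - time_warp(n, target_len)).ravel())
    X = np.asarray(residuals)
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    cum_var = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = min(int(np.searchsorted(cum_var, var_keep)) + 1, len(s))
    basis = Vt[:k]                     # expression eigenvectors (the PIEES axes)
    coords = (X - mean) @ basis.T      # low-dimensional expression coordinates
    return mean, basis, coords

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    expressive = [rng.normal(size=(f, 6)) for f in (50, 70, 60)]  # toy marker data
    neutral = [rng.normal(size=(f, 6)) for f in (48, 66, 62)]
    mean, basis, coords = build_piees(expressive, neutral)
    print(basis.shape, coords.shape)

In the full system described in the abstract, a texture-synthesis stage would then generate new trajectories in this eigenspace and blend them with synthesized neutral visual speech; that stage is omitted from the sketch.
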
650  4 |a Evaluation Study
650  4 |a Journal Article
650  4 |a Research Support, Non-U.S. Gov't
650  4 |a Research Support, U.S. Gov't, Non-P.H.S.
700 1  |a Neumann, Ulrich |e verfasserin |4 aut
700 1  |a Lewis, J P |e verfasserin |4 aut
700 1  |a Kim, Tae-Yong |e verfasserin |4 aut
700 1  |a Bulut, Murtaza |e verfasserin |4 aut
700 1  |a Narayanan, Shrikanth |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on visualization and computer graphics |d 1996 |g 12(2006), 6 vom: 15. Nov., Seite 1523-34 |w (DE-627)NLM098269445 |x 1941-0506 |7 nnns
773 18 |g volume:12 |g year:2006 |g number:6 |g day:15 |g month:11 |g pages:1523-34
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 12 |j 2006 |e 6 |b 15 |c 11 |h 1523-34