Personalized Audio-Driven 3D Facial Animation via Style-Content Disentanglement
We present a learning-based approach for generating 3D facial animations with the motion style of a specific subject from arbitrary audio inputs. The subject style is learned from a video clip (1-2 minutes) either downloaded from the Internet or captured through an ordinary camera. Traditional metho...
Detailed Description
Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - Vol. 30 (2024), No. 3, 19 March, pp. 1803-1820
Main author: Chai, Yujin (author)
Other authors: Shao, Tianjia; Weng, Yanlin; Zhou, Kun
Format: Online article
Language: English
Published: 2024
Parent work: IEEE transactions on visualization and computer graphics
Keywords: Journal Article; Research Support, U.S. Gov't, Non-P.H.S.; Research Support, Non-U.S. Gov't