Personalized Audio-Driven 3D Facial Animation via Style-Content Disentanglement
We present a learning-based approach for generating 3D facial animations with the motion style of a specific subject from arbitrary audio inputs. The subject style is learned from a video clip (1-2 minutes) either downloaded from the Internet or captured through an ordinary camera. Traditional metho...
Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 30(2024), 3, 19 March, pages 1803-1820
Main author: Chai, Yujin (author)
Other authors: Shao, Tianjia; Weng, Yanlin; Zhou, Kun
Format: Online article
Language: English
Published: 2024
Collection: IEEE transactions on visualization and computer graphics
Subjects: Journal Article; Research Support, U.S. Gov't, Non-P.H.S.; Research Support, Non-U.S. Gov't