Personalized Audio-Driven 3D Facial Animation via Style-Content Disentanglement
We present a learning-based approach for generating 3D facial animations with the motion style of a specific subject from arbitrary audio inputs. The subject style is learned from a video clip (1-2 minutes) either downloaded from the Internet or captured through an ordinary camera. Traditional metho...
Published in: | IEEE Transactions on Visualization and Computer Graphics, Vol. 30, No. 3 (20 Jan. 2024), pp. 1803-1820 |
Format: | Online article |
Language: | English |
Published: | 2024 |
Parent work: | IEEE Transactions on Visualization and Computer Graphics |
Subject headings: | Journal Article |