GaussianHead: High-fidelity Head Avatars with Learnable Gaussian Derivation


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Visualization and Computer Graphics. - 1996. - PP(2025), 17 Apr.
First Author: Wang, Jie (Author)
Other Authors: Xie, Jiu-Cheng; Li, Xianyan; Xu, Feng; Pun, Chi-Man; Gao, Hao
Format: Online Article
Language: English
Published: 2025
Parent Work: IEEE Transactions on Visualization and Computer Graphics
Keywords: Journal Article
Description
Abstract: Creating lifelike 3D head avatars and generating compelling animations for diverse subjects remain challenging in computer vision. This paper presents GaussianHead, which models the active head based on anisotropic 3D Gaussians. Our method integrates a motion deformation field and a single-resolution tri-plane to capture the head's intricate dynamics and detailed texture. Notably, we introduce a customized derivation scheme for each 3D Gaussian, facilitating the generation of multiple "doppelgangers" through learnable parameters for precise position transformation. This approach enables efficient representation of diverse Gaussian attributes and ensures their precision. Additionally, we propose an inherited derivation strategy for newly added Gaussians to expedite training. Extensive experiments demonstrate GaussianHead's efficacy, achieving high-fidelity visual results with a remarkably compact model size ($\approx 12$ MB). Our method outperforms state-of-the-art alternatives in tasks such as reconstruction, cross-identity reenactment, and novel view synthesis. The source code is available at: https://github.com/chiehwangs/gaussian-head
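
To make the abstract's central mechanism more concrete, the following is a minimal, illustrative PyTorch-style sketch of the learnable derivation idea: each Gaussian center spawns several "doppelganger" positions via learnable offsets, single-resolution tri-plane features are sampled at the derived positions, and the copies are aggregated with learned weights. This is an assumption, not the authors' released implementation (see the GitHub link above); all class names, parameters, and shapes are hypothetical.

    # Sketch only: hypothetical names/shapes, not the code from the GaussianHead repository.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LearnableDerivation(nn.Module):
        def __init__(self, num_gaussians, num_copies=4, feat_dim=32, plane_res=256):
            super().__init__()
            # Learnable per-Gaussian, per-copy position offsets (the "derivation").
            self.offsets = nn.Parameter(torch.zeros(num_gaussians, num_copies, 3))
            # Single-resolution tri-plane: three axis-aligned 2D feature grids.
            self.planes = nn.Parameter(torch.randn(3, feat_dim, plane_res, plane_res) * 0.01)
            # Logits for weighting the features of the derived copies.
            self.copy_logits = nn.Parameter(torch.zeros(num_gaussians, num_copies))

        def sample_plane(self, plane, coords_2d):
            # coords_2d in [-1, 1], shape (M, 2) -> features of shape (M, feat_dim).
            grid = coords_2d.view(1, -1, 1, 2)
            feats = F.grid_sample(plane.unsqueeze(0), grid, align_corners=True)
            return feats.view(plane.shape[0], -1).t()

        def forward(self, centers):
            # centers: (N, 3) Gaussian positions normalized to [-1, 1].
            derived = centers.unsqueeze(1) + self.offsets          # (N, K, 3) doppelgangers
            p = derived.view(-1, 3)
            # Project onto the xy, xz, and yz planes and sum the sampled features.
            feats = (self.sample_plane(self.planes[0], p[:, [0, 1]]) +
                     self.sample_plane(self.planes[1], p[:, [0, 2]]) +
                     self.sample_plane(self.planes[2], p[:, [1, 2]]))
            feats = feats.view(centers.shape[0], self.offsets.shape[1], -1)
            # Aggregate the copies into one feature vector per Gaussian.
            w = torch.softmax(self.copy_logits, dim=-1).unsqueeze(-1)
            return (w * feats).sum(dim=1)                          # (N, feat_dim)

In such a setup, the per-Gaussian feature returned here would feed a small decoder that predicts appearance and geometry attributes; that decoder, the motion deformation field, and the inherited derivation strategy for newly densified Gaussians are omitted from this sketch.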
Description: Date Revised 18 Apr 2025
Published: Print-Electronic
Citation Status: Publisher
ISSN: 1941-0506
DOI: 10.1109/TVCG.2025.3561794