GaussianHead: High-fidelity Head Avatars with Learnable Gaussian Derivation

Bibliographic Details
Published in: IEEE Transactions on Visualization and Computer Graphics. - 1996. - PP(2025), 17 Apr.
Main Author: Wang, Jie (Author)
Other Authors: Xie, Jiu-Cheng; Li, Xianyan; Xu, Feng; Pun, Chi-Man; Gao, Hao
Format: Online Article
Language: English
Published: 2025
Collection: IEEE Transactions on Visualization and Computer Graphics
Subjects: Journal Article
Description
Abstract: Creating lifelike 3D head avatars and generating compelling animations for diverse subjects remain challenging in computer vision. This paper presents GaussianHead, which models the active head based on anisotropic 3D Gaussians. Our method integrates a motion deformation field and a single-resolution tri-plane to capture the head's intricate dynamics and detailed texture. Notably, we introduce a customized derivation scheme for each 3D Gaussian, facilitating the generation of multiple "doppelgangers" through learnable parameters for precise position transformation. This approach enables efficient representation of diverse Gaussian attributes and ensures their precision. Additionally, we propose an inherited derivation strategy for newly added Gaussians to expedite training. Extensive experiments demonstrate GaussianHead's efficacy, achieving high-fidelity visual results with a remarkably compact model size (≈12 MB). Our method outperforms state-of-the-art alternatives in tasks such as reconstruction, cross-identity reenactment, and novel view synthesis. The source code is available at: https://github.com/chiehwangs/gaussian-head
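The derivation scheme in the abstract maps each Gaussian to several learnable "doppelganger" positions that are then used to query attributes. The sketch below is a minimal illustration of one way such a per-Gaussian learnable position transformation could look in plain PyTorch; the class name, the use of per-copy quaternions, the identity initialization, and all shapes are assumptions for illustration, not the released code.

```python
import torch
import torch.nn as nn


class LearnableDerivation(nn.Module):
    """Illustrative sketch: map each canonical Gaussian center to K derived
    "doppelganger" positions via learnable per-Gaussian, per-copy rotations.
    Names and shapes are hypothetical, not taken from the paper's code."""

    def __init__(self, num_gaussians: int, num_copies: int = 4):
        super().__init__()
        # One learnable quaternion per Gaussian per copy, identity-initialized.
        init = torch.zeros(num_gaussians, num_copies, 4)
        init[..., 0] = 1.0  # w component of the identity quaternion
        self.quats = nn.Parameter(init)

    def forward(self, centers: torch.Tensor) -> torch.Tensor:
        # centers: (N, 3) canonical Gaussian positions.
        q = nn.functional.normalize(self.quats, dim=-1)  # (N, K, 4)
        w, x, y, z = q.unbind(-1)
        # Quaternion -> rotation matrix, (N, K, 3, 3), rows listed in order.
        R = torch.stack([
            1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
            2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
            2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y),
        ], dim=-1).reshape(*q.shape[:-1], 3, 3)
        # Rotate each center into K derived positions: (N, K, 3).
        return torch.einsum('nkij,nj->nki', R, centers)
```

In such a setup, the K derived positions per Gaussian would be projected onto the tri-plane to gather features, and only the quaternions are optimized, which keeps the per-Gaussian overhead to 4K extra parameters.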
Description: Date Revised 18.04.2025
Published: Print-Electronic
Citation Status: Publisher
ISSN: 1941-0506
DOI: 10.1109/TVCG.2025.3561794