NeRF-Art: Text-Driven Neural Radiance Fields Stylization

As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-quality novel view synthesis from multi-view images. Stylizing NeRF, however, remains challenging, especially in simulating a text-guided style with both the appearance and the geometry altered simultaneously. In this paper, we present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt. Unlike previous approaches that either lack sufficient geometry deformations and texture details or require meshes to guide the stylization, our method can shift a 3D scene to the target style characterized by desired geometry and appearance variations without any mesh guidance. This is achieved by introducing a novel global-local contrastive learning strategy, combined with the directional constraint to simultaneously control both the trajectory and the strength of the target style. Moreover, we adopt a weight regularization method to effectively suppress cloudy artifacts and geometry noises which arise easily when the density field is transformed during geometry stylization. Through extensive experiments on various styles, we demonstrate that our method is effective and robust regarding both single-view stylization quality and cross-view consistency.
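The "directional constraint" named in the abstract is not spelled out in this record; a common formulation of such a constraint, used for example in StyleGAN-NADA and plausibly what is meant here, is a CLIP directional loss that aligns the image-space change between the original and stylized renders with the text-space change between a source prompt and a target style prompt. The Python sketch below is an illustrative assumption of that idea, not the authors' implementation; the ViT-B/32 backbone, the function names, and the prompts are hypothetical.

    import torch
    import torch.nn.functional as F
    import clip  # OpenAI CLIP (pip install git+https://github.com/openai/CLIP.git)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Backbone choice is an assumption; the record does not specify one.
    model, _ = clip.load("ViT-B/32", device=device)

    def encode_text(prompt: str) -> torch.Tensor:
        # Embed a prompt and L2-normalize, so differences live on the unit sphere.
        tokens = clip.tokenize([prompt]).to(device)
        return F.normalize(model.encode_text(tokens), dim=-1)

    def encode_image(images: torch.Tensor) -> torch.Tensor:
        # images: (N, 3, 224, 224), already resized and normalized for CLIP.
        return F.normalize(model.encode_image(images), dim=-1)

    def directional_loss(stylized, original, target_prompt, source_prompt):
        # Compare the direction moved in image space against the direction
        # requested in text space; minimizing 1 - cosine similarity steers
        # the stylized render along the text-specified edit.
        d_img = F.normalize(encode_image(stylized) - encode_image(original), dim=-1)
        d_txt = F.normalize(encode_text(target_prompt) - encode_text(source_prompt), dim=-1)
        return (1.0 - (d_img * d_txt).sum(dim=-1)).mean()

In such a setup the loss would be evaluated on renders from the NeRF being fine-tuned, with the pre-trained model's renders as the source; the global-local contrastive strategy and the density-weight regularization mentioned in the abstract would be additional loss terms on top of this.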

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 30(2024), 8, 01 Aug., pages 4983-4996
Main author: Wang, Can (Author)
Other authors: Jiang, Ruixiang, Chai, Menglei, He, Mingming, Chen, Dongdong, Liao, Jing
Format: Online article
Language: English
Published: 2024
Collection: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000caa a22002652c 4500
001 NLM35781049X
003 DE-627
005 20250304210640.0
007 cr uuu---uuuuu
008 231226s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2023.3283400  |2 doi 
028 5 2 |a pubmed25n1192.xml 
035 |a (DE-627)NLM35781049X 
035 |a (NLM)37279137 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Wang, Can  |e verfasserin  |4 aut 
245 1 0 |a NeRF-Art  |b Text-Driven Neural Radiance Fields Stylization 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 01.07.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-quality novel view synthesis from multi-view images. Stylizing NeRF, however, remains challenging, especially in simulating a text-guided style with both the appearance and the geometry altered simultaneously. In this paper, we present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt. Unlike previous approaches that either lack sufficient geometry deformations and texture details or require meshes to guide the stylization, our method can shift a 3D scene to the target style characterized by desired geometry and appearance variations without any mesh guidance. This is achieved by introducing a novel global-local contrastive learning strategy, combined with the directional constraint to simultaneously control both the trajectory and the strength of the target style. Moreover, we adopt a weight regularization method to effectively suppress cloudy artifacts and geometry noises which arise easily when the density field is transformed during geometry stylization. Through extensive experiments on various styles, we demonstrate that our method is effective and robust regarding both single-view stylization quality and cross-view consistency.
650 4 |a Journal Article 
700 1 |a Jiang, Ruixiang  |e verfasserin  |4 aut 
700 1 |a Chai, Menglei  |e verfasserin  |4 aut 
700 1 |a He, Mingming  |e verfasserin  |4 aut 
700 1 |a Chen, Dongdong  |e verfasserin  |4 aut 
700 1 |a Liao, Jing  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 30(2024), 8 vom: 01. Aug., Seite 4983-4996  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnas 
773 1 8 |g volume:30  |g year:2024  |g number:8  |g day:01  |g month:08  |g pages:4983-4996 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2023.3283400  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 30  |j 2024  |e 8  |b 01  |c 08  |h 4983-4996