NeRF-In: Free-Form Inpainting for Pretrained NeRF With RGB-D Priors
Published in: IEEE Computer Graphics and Applications. - 1991. - Vol. 44 (2024), no. 2, 24 March, pp. 100-109
Main author:
Other authors:
Format: Online article
Language: English
Published: 2024
Collection: IEEE Computer Graphics and Applications
Subjects: Journal Article
Abstract: Neural radiance field (NeRF) has emerged as a versatile scene representation. However, it is still unintuitive to edit a pretrained NeRF because the network parameters and the scene appearance are often not explicitly associated. In this article, we introduce the first framework that enables users to retouch undesired regions in a pretrained NeRF scene without accessing any training data or category-specific data prior. The user first draws a free-form mask to specify a region containing the unwanted objects over an arbitrary rendered view from the pretrained NeRF. Our framework transfers the user-drawn mask to other rendered views and estimates guiding color and depth images within the transferred masked regions. Next, we formulate an optimization problem that jointly inpaints the image content in all masked regions by updating NeRF's parameters. We demonstrate our framework on diverse scenes and show that it obtains visually plausible and structurally consistent results with less manual user effort.
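The abstract describes optimizing NeRF parameters so that rendered views agree with guiding color and depth images inside user-transferred masks. A minimal sketch of such a masked color-plus-depth objective is shown below; the function name, array shapes, and the depth weight `lam` are illustrative assumptions, not the paper's actual formulation or implementation.

```python
import numpy as np

def inpainting_loss(rendered_rgb, rendered_depth, guide_rgb, guide_depth, mask):
    """Toy masked L2 objective combining color and depth guidance.

    rendered_rgb / guide_rgb:   (H, W, 3) arrays
    rendered_depth / guide_depth: (H, W) arrays
    mask: (H, W) array, nonzero inside the user-specified region.
    The paper minimizes such a joint objective over NeRF's parameters;
    this sketch only evaluates the objective on plain arrays.
    """
    m = mask.astype(bool)
    # Color term: mean squared error against the guiding RGB image,
    # restricted to masked pixels.
    color_term = np.mean((rendered_rgb[m] - guide_rgb[m]) ** 2)
    # Depth term: mean squared error against the guiding depth image.
    depth_term = np.mean((rendered_depth[m] - guide_depth[m]) ** 2)
    lam = 0.1  # assumed relative weight of the depth prior
    return color_term + lam * depth_term
```

In an actual pipeline, this scalar would be backpropagated through the NeRF renderer across all masked views, which is what makes the inpainted region multi-view consistent.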
Description: Date revised: 25.03.2024; published: Print-Electronic; citation status: PubMed-not-MEDLINE
ISSN: 1558-1756
DOI: 10.1109/MCG.2023.3336224