StructNeRF: Neural Radiance Fields for Indoor Scenes With Structural Hints
| Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 12 (15 Dec. 2023), pp. 15694-15705 |
|---|---|
| Format: | Online article |
| Language: | English |
| Published: | 2023 |
| In collection: | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Subjects: | Journal Article |
| Abstract: | Neural Radiance Fields (NeRF) achieve photo-realistic view synthesis with densely captured input images. However, the geometry of NeRF is extremely under-constrained given sparse views, resulting in significant degradation of novel view synthesis quality. Inspired by self-supervised depth estimation methods, we propose StructNeRF, a solution to novel view synthesis for indoor scenes with sparse inputs. StructNeRF leverages the structural hints naturally embedded in multi-view inputs to handle the unconstrained geometry issue in NeRF. Specifically, it tackles textured and non-textured regions separately: a patch-based multi-view consistent photometric loss is proposed to constrain the geometry of textured regions, while non-textured regions are explicitly restricted to be 3D-consistent planes. Through these dense self-supervised depth constraints, our method improves both the geometry and the view synthesis performance of NeRF without any additional training on external data. Extensive experiments on several real-world datasets demonstrate that StructNeRF shows superior or comparable performance, both quantitatively and qualitatively, compared to state-of-the-art methods (e.g., NeRF, DSNeRF, RegNeRF, Dense Depth Priors, MonoSDF) for indoor scenes with sparse inputs. |
| Description: | Date revised: 07.11.2023; published: Print-Electronic; citation status: PubMed-not-MEDLINE |
| ISSN: | 1939-3539 |
| DOI: | 10.1109/TPAMI.2023.3305295 |
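The abstract's patch-based multi-view consistent photometric loss can be illustrated with a minimal NumPy sketch: pixels of a source patch are back-projected using rendered depth, reprojected into a second view, and the color difference is penalized. This is a hedged illustration only, not the authors' implementation; the function names, nearest-neighbor sampling, and pinhole-camera setup here are assumptions for the sketch (StructNeRF's actual loss operates on patches rendered by the NeRF and uses its estimated depth).

```python
import numpy as np

def warp_patch(patch_xy, depth, K_src, K_tgt, T_src_to_tgt):
    """Back-project source pixels with their rendered depths, then project
    the resulting 3D points into the target camera.
    patch_xy: (N, 2) pixel coords; depth: (N,) depths; K_*: 3x3 intrinsics;
    T_src_to_tgt: 4x4 rigid transform from source to target camera frame."""
    ones = np.ones((patch_xy.shape[0], 1))
    pix_h = np.hstack([patch_xy, ones])                   # homogeneous pixels (N, 3)
    cam_pts = (np.linalg.inv(K_src) @ pix_h.T) * depth    # 3D points in source frame (3, N)
    cam_h = np.vstack([cam_pts, ones.T])                  # homogeneous 3D points (4, N)
    tgt_pts = (T_src_to_tgt @ cam_h)[:3]                  # points in target frame (3, N)
    proj = K_tgt @ tgt_pts                                # project into target image
    return (proj[:2] / proj[2]).T                         # target pixel coords (N, 2)

def photometric_loss(src_colors, tgt_img, warped_xy):
    """Mean L1 photometric error between source patch colors and
    nearest-neighbor samples of the target image at the warped coords."""
    h, w = tgt_img.shape[:2]
    xy = np.round(warped_xy).astype(int)
    xy[:, 0] = np.clip(xy[:, 0], 0, w - 1)                # clamp to image bounds
    xy[:, 1] = np.clip(xy[:, 1], 0, h - 1)
    tgt_colors = tgt_img[xy[:, 1], xy[:, 0]]
    return np.abs(src_colors - tgt_colors).mean()
```

If the rendered depth is correct, the warped patch lands on the same scene content in the target view and the loss vanishes; wrong depth misaligns the patch and raises the loss, which is what supervises geometry in textured regions.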