Fast Non-Rigid Radiance Fields from Monocularized Data

The reconstruction and novel view synthesis of dynamic scenes recently gained increased attention. As reconstruction from large-scale multi-view data involves immense memory and computational requirements, recent benchmark datasets provide collections of single monocular views per timestamp sampled from multiple (virtual) cameras. We refer to this form of inputs as monocularized data. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is often limited in the training speed and angular range for generating novel views. This paper addresses these limitations and proposes a new method for full 360° inward-facing novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) An efficient deformation module that decouples the processing of spatial and temporal information for accelerated training and inference; and 2) A static module representing the canonical scene as a fast hash-encoded neural radiance field. In addition to existing synthetic monocularized data, we systematically analyze the performance on real-world inward-facing scenes using a newly recorded challenging dataset sampled from a synchronized large-scale multi-view rig. In both cases, our method is significantly faster than previous methods, converging in less than 7 minutes and achieving real-time framerates at 1K resolution, while obtaining a higher visual accuracy for generated novel views. Our code and dataset are available online: https://github.com/MoritzKappel/MoNeRF
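The abstract describes a two-module design: a deformation module that maps each observed point back into a canonical frame while processing spatial and temporal information separately, and a static module that represents the canonical scene as a hash-encoded radiance field. The following is a minimal conceptual sketch of that data flow, not the authors' implementation; all layer sizes, the toy random "MLPs", and the simplified hash encoding are illustrative assumptions.

```python
import numpy as np

# Conceptual sketch of the pipeline described in the abstract:
#   (x, t) --deformation module--> canonical point --hash-encoded field--> (rgb, density)
# Weights are random placeholders; a real system would train them.

rng = np.random.default_rng(0)

n_levels, table_size, feat_dim = 4, 2**10, 2
# one small feature table per resolution level (toy multiresolution hash grid)
level_tables = [rng.normal(size=(table_size, feat_dim)) for _ in range(n_levels)]

def hash_encode(x):
    """Toy multiresolution hash encoding of 3D points in [0, 1]^3."""
    feats = []
    for level in range(n_levels):
        res = 2 ** (level + 2)                    # grid resolution at this level
        idx = np.floor(x * res).astype(np.int64)  # integer voxel coordinates
        # spatial hash of the voxel coordinates into the level's feature table
        h = (idx * np.array([1, 2654435761, 805459861])).sum(-1) % table_size
        feats.append(level_tables[level][h])
    return np.concatenate(feats, axis=-1)

# Deformation module: spatial and temporal inputs are encoded by separate
# branches (the "decoupled" processing the abstract mentions) and fused late.
W_space = rng.normal(size=(3, 16))
W_time = rng.normal(size=(1, 16))
W_out = rng.normal(size=(32, 3)) * 0.01

def deform(x, t):
    """Predict an offset mapping points (x, t) into the canonical frame."""
    s = np.tanh(x @ W_space)                         # spatial branch
    tt = np.tanh(np.full((len(x), 1), t) @ W_time)   # temporal branch
    return np.tanh(np.concatenate([s, tt], axis=-1) @ W_out)

# Static module: canonical hash features -> (rgb, density).
W_field = rng.normal(size=(n_levels * feat_dim, 4))

def canonical_field(x_canonical):
    out = hash_encode(np.clip(x_canonical, 0.0, 1.0)) @ W_field
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))   # colors squashed into [0, 1]
    density = np.log1p(np.exp(out[:, 3:]))    # non-negative volume density
    return rgb, density

x = rng.uniform(size=(5, 3))                  # five sample points at time t=0.3
rgb, density = canonical_field(x + deform(x, t=0.3))
print(rgb.shape, density.shape)               # (5, 3) (5, 1)
```

Because the temporal branch only re-encodes the scalar time while the hash-encoded canonical field stays static, per-frame work is small, which is the intuition behind the fast training and real-time inference the abstract reports.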

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - PP(2024), 20 Feb.
Main author: Kappel, Moritz (Author)
Other authors: Golyanik, Vladislav, Castillo, Susana, Theobalt, Christian, Magnor, Marcus
Format: Online article
Language: English
Published: 2024
Access to parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM368670163
003 DE-627
005 20240222092156.0
007 cr uuu---uuuuu
008 240222s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2024.3367431  |2 doi 
028 5 2 |a pubmed24n1301.xml 
035 |a (DE-627)NLM368670163 
035 |a (NLM)38376960 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Kappel, Moritz  |e verfasserin  |4 aut 
245 1 0 |a Fast Non-Rigid Radiance Fields from Monocularized Data 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 21.02.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a The reconstruction and novel view synthesis of dynamic scenes recently gained increased attention. As reconstruction from large-scale multi-view data involves immense memory and computational requirements, recent benchmark datasets provide collections of single monocular views per timestamp sampled from multiple (virtual) cameras. We refer to this form of inputs as monocularized data. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is often limited in the training speed and angular range for generating novel views. This paper addresses these limitations and proposes a new method for full 360° inward-facing novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) An efficient deformation module that decouples the processing of spatial and temporal information for accelerated training and inference; and 2) A static module representing the canonical scene as a fast hash-encoded neural radiance field. In addition to existing synthetic monocularized data, we systematically analyze the performance on real-world inward-facing scenes using a newly recorded challenging dataset sampled from a synchronized large-scale multi-view rig. In both cases, our method is significantly faster than previous methods, converging in less than 7 minutes and achieving real-time framerates at 1K resolution, while obtaining a higher visual accuracy for generated novel views. Our code and dataset are available online: https://github.com/MoritzKappel/MoNeRF 
650 4 |a Journal Article 
700 1 |a Golyanik, Vladislav  |e verfasserin  |4 aut 
700 1 |a Castillo, Susana  |e verfasserin  |4 aut 
700 1 |a Theobalt, Christian  |e verfasserin  |4 aut 
700 1 |a Magnor, Marcus  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g PP(2024) vom: 20. Feb.  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:PP  |g year:2024  |g day:20  |g month:02 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2024.3367431  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2024  |b 20  |c 02