Learning to Reconstruct and Understand Indoor Scenes From Sparse Views

This paper proposes a new method for simultaneous 3D reconstruction and semantic segmentation for indoor scenes. Unlike existing methods that require recording a video using a color camera and/or a depth camera, our method only needs a small number of (e.g., 3~5) color images from uncalibrated sparse views, which significantly simplifies data acquisition and broadens applicable scenarios. To achieve promising 3D reconstruction from sparse views with limited overlap, our method first recovers the depth map and semantic information for each view, and then fuses the depth maps into a 3D scene. To this end, we design an iterative deep architecture, named IterNet, to estimate the depth map and semantic segmentation alternately. To obtain accurate alignment between views with limited overlap, we further propose a joint global and local registration method to reconstruct a 3D scene with semantic information. We also make available a new indoor synthetic dataset, containing photorealistic high-resolution RGB images, accurate depth maps and pixel-level semantic labels for thousands of complex layouts. Experimental results on public datasets and our dataset demonstrate that our method achieves more accurate depth estimation, smaller semantic segmentation errors, and better 3D reconstruction results over state-of-the-art methods.
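The alternating estimation loop the abstract describes can be sketched in a few lines. This is a hypothetical toy simplification, not the paper's implementation: the real IterNet branches are deep CNNs, whereas here `refine_depth` and `refine_semantics` are stand-in functions chosen only to make the alternation pattern concrete.

```python
import numpy as np

def refine_depth(image, semantics):
    # Stand-in for the depth branch: blend image intensity with the
    # current semantic map (the real branch is a learned network).
    return 0.7 * image + 0.3 * semantics

def refine_semantics(image, depth):
    # Stand-in for the segmentation branch: a depth-guided soft
    # two-class map via a sigmoid.
    return 1.0 / (1.0 + np.exp(-(image - depth)))

def iternet_step(image, n_iters=3):
    """Alternate depth and semantic refinement, IterNet-style:
    each estimate is updated using the other's latest output."""
    h, w = image.shape
    depth = np.zeros((h, w))           # initial depth guess
    semantics = np.full((h, w), 0.5)   # uninformative semantic prior
    for _ in range(n_iters):
        depth = refine_depth(image, semantics)
        semantics = refine_semantics(image, depth)
    return depth, semantics

rng = np.random.default_rng(0)
view = rng.random((4, 4))              # one color view (grayscale toy)
depth, sem = iternet_step(view)
print(depth.shape, sem.shape)          # per-view maps match the input size
```

In the full pipeline, such per-view depth and semantic maps would then be fused across views by the joint global and local registration step; that fusion is omitted here.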

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - (2020), 14 Apr.
Main Author: Yang, Jingyu (Author)
Other Authors: Xu, Ji, Li, Kun, Lai, Yu-Kun, Yue, Huanjing, Lu, Jianzhi, Wu, Hao, Liu, Yebin
Format: Online Article
Language: English
Published: 2020
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM308907043
003 DE-627
005 20240229162748.0
007 cr uuu---uuuuu
008 231225s2020 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2020.2986712  |2 doi 
028 5 2 |a pubmed24n1308.xml 
035 |a (DE-627)NLM308907043 
035 |a (NLM)32305917 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Yang, Jingyu  |e verfasserin  |4 aut 
245 1 0 |a Learning to Reconstruct and Understand Indoor Scenes From Sparse Views 
264 1 |c 2020 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 27.02.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a This paper proposes a new method for simultaneous 3D reconstruction and semantic segmentation for indoor scenes. Unlike existing methods that require recording a video using a color camera and/or a depth camera, our method only needs a small number of (e.g., 3~5) color images from uncalibrated sparse views, which significantly simplifies data acquisition and broadens applicable scenarios. To achieve promising 3D reconstruction from sparse views with limited overlap, our method first recovers the depth map and semantic information for each view, and then fuses the depth maps into a 3D scene. To this end, we design an iterative deep architecture, named IterNet, to estimate the depth map and semantic segmentation alternately. To obtain accurate alignment between views with limited overlap, we further propose a joint global and local registration method to reconstruct a 3D scene with semantic information. We also make available a new indoor synthetic dataset, containing photorealistic high-resolution RGB images, accurate depth maps and pixel-level semantic labels for thousands of complex layouts. Experimental results on public datasets and our dataset demonstrate that our method achieves more accurate depth estimation, smaller semantic segmentation errors, and better 3D reconstruction results over state-of-the-art methods. 
650 4 |a Journal Article 
700 1 |a Xu, Ji  |e verfasserin  |4 aut 
700 1 |a Li, Kun  |e verfasserin  |4 aut 
700 1 |a Lai, Yu-Kun  |e verfasserin  |4 aut 
700 1 |a Yue, Huanjing  |e verfasserin  |4 aut 
700 1 |a Lu, Jianzhi  |e verfasserin  |4 aut 
700 1 |a Wu, Hao  |e verfasserin  |4 aut 
700 1 |a Liu, Yebin  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g (2020) vom: 14. Apr.  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g year:2020  |g day:14  |g month:04 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2020.2986712  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |j 2020  |b 14  |c 04