Automatic Spatially Varying Illumination Recovery of Indoor Scenes Based on a Single RGB-D Image

We propose an automatic framework to recover the illumination of indoor scenes based on a single RGB-D image. Unlike previous works, our method can recover spatially varying illumination without using any lighting capturing devices or HDR information. The recovered illumination can produce realistic...

Detailed Description

Bibliographic Details

Published in: IEEE transactions on visualization and computer graphics. - 1996. - 26(2020), 4, 29 Apr., pages 1672-1685
Main Author: Xing, Guanyu (Author)
Other Authors: Liu, Yanli; Ling, Haibin; Granier, Xavier; Zhang, Yanci
Format: Online Article
Language: English
Published: 2020
Access to parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM290053528
003 DE-627
005 20231225063908.0
007 cr uuu---uuuuu
008 231225s2020 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2018.2876541  |2 doi 
028 5 2 |a pubmed24n0966.xml 
035 |a (DE-627)NLM290053528 
035 |a (NLM)30371374 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Xing, Guanyu  |e verfasserin  |4 aut 
245 1 0 |a Automatic Spatially Varying Illumination Recovery of Indoor Scenes Based on a Single RGB-D Image 
264 1 |c 2020 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 09.03.2020 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a We propose an automatic framework to recover the illumination of indoor scenes based on a single RGB-D image. Unlike previous works, our method can recover spatially varying illumination without using any lighting capturing devices or HDR information. The recovered illumination can produce realistic rendering results. To model the geometry of the visible and invisible parts of scenes corresponding to the input RGB-D image, we assume that all objects shown in the image are located in a box with six faces and build a planar-based geometry model based on the input depth map. We then present a confidence-scoring based strategy to separate the light sources from the highlight areas. The positions of light sources both in and out of the camera's view are calculated based on the classification result and the recovered geometry model. Finally, an iterative procedure is proposed to calculate the colors of light sources and the materials in the scene. In addition, a data-driven method is used to set constraints on the light source intensities. Using the estimated light sources and geometry model, environment maps at different points in the scene are generated that can model the spatial variance of illumination. The experimental results demonstrate the validity and flexibility of our approach. 
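Illustrative note on the 520 abstract above: its final step generates environment maps at different scene points from the recovered box geometry and the estimated light sources. The following is a minimal, hypothetical Python sketch of that step only, under strong simplifying assumptions (an axis-aligned box room, point lights, a fixed 0.3 proximity threshold); all function names, parameters, and values are illustrative and are not taken from the paper.

# Illustrative sketch only, NOT the authors' implementation. Assumes a box-shaped
# room and a list of estimated point lights, and builds a low-resolution
# latitude-longitude environment map at a query point inside the room.
import numpy as np

def ray_box_exit(origin, direction, box_min, box_max):
    """Distance along `direction` at which a ray starting inside the box exits it."""
    inv = 1.0 / np.where(direction == 0.0, 1e-12, direction)
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    return max(float(np.min(np.maximum(t1, t2))), 0.0)

def environment_map(point, lights, box_min, box_max, ambient, size=(32, 64)):
    """Environment map (H x W x 3) seen from `point`.

    lights  -- list of (position, rgb) tuples for the estimated light sources
    ambient -- RGB color used for directions that hit a wall away from any light
    """
    height, width = size
    env = np.tile(np.asarray(ambient, dtype=float), (height, width, 1))
    for row in range(height):
        theta = (row + 0.5) / height * np.pi            # polar angle from +Y
        for col in range(width):
            phi = (col + 0.5) / width * 2.0 * np.pi     # azimuth around +Y
            d = np.array([np.sin(theta) * np.cos(phi),
                          np.cos(theta),
                          np.sin(theta) * np.sin(phi)])
            t_exit = ray_box_exit(point, d, box_min, box_max)
            hit = point + t_exit * d                    # point on a box face
            for light_pos, light_rgb in lights:
                # If the ray leaves the box close to an estimated light,
                # this direction is dominated by that light's color.
                if np.linalg.norm(hit - np.asarray(light_pos)) < 0.3:  # assumed threshold
                    env[row, col] = light_rgb
                    break
    return env

if __name__ == "__main__":
    box_min, box_max = np.array([0.0, 0.0, 0.0]), np.array([4.0, 3.0, 5.0])
    lights = [((2.0, 2.9, 2.5), (1.0, 0.95, 0.8))]      # one assumed ceiling light
    env = environment_map(np.array([1.0, 1.0, 1.0]), lights,
                          box_min, box_max, ambient=(0.2, 0.2, 0.2))
    print(env.shape)                                     # (32, 64, 3)

This sketch simply colors each direction by the estimated light closest to where the ray exits the box; the paper additionally recovers light source colors and scene materials through an iterative procedure, so its environment maps carry more than this binary light/ambient split.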
650 4 |a Journal Article 
700 1 |a Liu, Yanli  |e verfasserin  |4 aut 
700 1 |a Ling, Haibin  |e verfasserin  |4 aut 
700 1 |a Granier, Xavier  |e verfasserin  |4 aut 
700 1 |a Zhang, Yanci  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 26(2020), 4 vom: 29. Apr., Seite 1672-1685  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:26  |g year:2020  |g number:4  |g day:29  |g month:04  |g pages:1672-1685 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2018.2876541  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 26  |j 2020  |e 4  |b 29  |c 04  |h 1672-1685