Automatic Spatially Varying Illumination Recovery of Indoor Scenes Based on a Single RGB-D Image

Bibliographic Details
Published in: IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 4 (29 Apr. 2020), pp. 1672-1685
First Author: Xing, Guanyu (Author)
Other Authors: Liu, Yanli; Ling, Haibin; Granier, Xavier; Zhang, Yanci
Format: Online Article
Language: English
Published: 2020
Parent Work: IEEE Transactions on Visualization and Computer Graphics
Subjects: Journal Article
Description
Abstract: We propose an automatic framework to recover the illumination of indoor scenes based on a single RGB-D image. Unlike previous works, our method can recover spatially varying illumination without using any lighting-capturing devices or HDR information. The recovered illumination can produce realistic rendering results. To model the geometry of the visible and invisible parts of the scene corresponding to the input RGB-D image, we assume that all objects shown in the image are located in a box with six faces and build a plane-based geometry model from the input depth map. We then present a confidence-scoring-based strategy to separate the light sources from the highlight areas. The positions of light sources both in and out of the camera's view are calculated based on the classification result and the recovered geometry model. Finally, an iterative procedure is proposed to calculate the colors of the light sources and the materials in the scene. In addition, a data-driven method is used to set constraints on the light-source intensities. Using the estimated light sources and geometry model, environment maps at different points in the scene are generated that can model the spatial variance of illumination. The experimental results demonstrate the validity and flexibility of our approach.
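The abstract's final step lends itself to a short illustration. Below is a minimal Python sketch, not the paper's implementation: it splats estimated point light sources into a latitude-longitude environment map centered at a query point, which is one simple way to realize the per-point environment maps the abstract describes. It ignores occlusion by the recovered box geometry, and the names `Light` and `make_env_map` are hypothetical.

```python
import numpy as np

# Hypothetical point light source: scene-space position and linear RGB color.
# The paper estimates these from the RGB-D input; here they are given.
class Light:
    def __init__(self, position, color):
        self.position = np.asarray(position, dtype=float)  # (3,)
        self.color = np.asarray(color, dtype=float)        # (3,)

def make_env_map(point, lights, height=32, width=64):
    """Splat point lights into a latitude-longitude environment map
    centered at `point`, with inverse-square distance falloff."""
    env = np.zeros((height, width, 3))
    for light in lights:
        d = light.position - point
        dist = np.linalg.norm(d)
        if dist < 1e-6:
            continue  # skip a light coincident with the query point
        d /= dist
        # Direction -> spherical coordinates (theta: polar, phi: azimuth).
        theta = np.arccos(np.clip(d[2], -1.0, 1.0))   # [0, pi]
        phi = np.arctan2(d[1], d[0]) % (2 * np.pi)    # [0, 2*pi)
        row = min(int(theta / np.pi * height), height - 1)
        col = min(int(phi / (2 * np.pi) * width), width - 1)
        env[row, col] += light.color / (dist ** 2)
    return env

# Usage: environment maps at two different points differ, capturing
# the spatially varying illumination described in the abstract.
lights = [Light([0.0, 0.0, 2.5], [50.0, 48.0, 45.0]),
          Light([3.0, 1.0, 2.0], [20.0, 25.0, 30.0])]
env_a = make_env_map(np.array([0.0, 0.0, 0.0]), lights)
env_b = make_env_map(np.array([2.5, 1.0, 0.5]), lights)
```

Because each query point yields its own map, nearby points receive different incident lighting, which is the spatial variance the recovered illumination is meant to capture.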
Description: Date Revised 09.03.2020
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0506
DOI: 10.1109/TVCG.2018.2876541