Planar Abstraction and Inverse Rendering of 3D Indoor Environments

Bibliographic details

Published in: IEEE Transactions on Visualization and Computer Graphics. - 1996. - Vol. 27 (2021), No. 6, 17 June, pp. 2992-3006
Main author: Kim, Young Min (Author)
Other authors: Ryu, Sangwoo; Kim, Ig-Jae
Format: Online article
Language: English
Published: 2021
Collection: IEEE Transactions on Visualization and Computer Graphics
Subjects: Journal Article
Description
Abstract: Scanning and acquiring a 3D indoor environment suffers from complex occlusions and misalignment errors. The reconstruction obtained from an RGB-D scanner contains holes in geometry and ghosting in texture. These are easily noticeable and cannot be considered visually compelling VR content without further processing. On the other hand, the well-known Manhattan World priors successfully recreate relatively simple structures. In this article, we would like to push the limit of planar representation in indoor environments. Given an initial 3D reconstruction captured by an RGB-D sensor, we use planes not only to represent the environment geometrically but also to solve an inverse rendering problem considering texture and light. The complex process of shape inference and intrinsic imaging is greatly simplified with the help of detected planes and yet produces a realistic 3D indoor environment. The generated content can adequately represent the spatial arrangements for various AR/VR applications and can be readily composited with virtual objects possessing plausible lighting and texture.
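The abstract hinges on detecting dominant planes in a noisy RGB-D reconstruction. A common generic technique for this step is RANSAC plane fitting on the point cloud; the sketch below is an illustration of that idea under the assumption of an N x 3 NumPy point array, not a reproduction of the authors' pipeline.

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.02, seed=None):
    """Fit a dominant plane n·p + d = 0 to an Nx3 point cloud via RANSAC.

    Returns the unit normal n, offset d, and a boolean inlier mask.
    (Illustrative sketch; parameters like `thresh` are assumptions.)
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        # Sample 3 distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (near-collinear) sample
            continue
        n /= norm
        d = -n.dot(p0)
        # Points within `thresh` of the candidate plane count as inliers.
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine the plane by least squares (SVD) over the best inlier set.
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    n = np.linalg.svd(pts - centroid)[2][-1]
    return n, -n.dot(centroid), best_inliers
```

In a full planar-abstraction pipeline, such a routine would be applied repeatedly, removing each detected plane's inliers before searching for the next plane, until the remaining points no longer support a large planar consensus.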
Description: Date Revised 13.05.2021
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0506
DOI: 10.1109/TVCG.2019.2960776