Learning Photometric Feature Transform for Free-Form Object Scan


Bibliographic Details
Published in: IEEE Transactions on Visualization and Computer Graphics. - 1996. - 31(2025), 9, dated 22 Aug., pages 6398-6409
First Author: Feng, Xiang (Author)
Other Authors: Kang, Kaizhang, Pei, Fan, Ding, Huakeng, You, Jinjiang, Tan, Ping, Zhou, Kun, Wu, Hongzhi
Format: Online Article
Language: English
Published: 2025
Parent work: IEEE Transactions on Visualization and Computer Graphics
Subjects: Journal Article
Description
Abstract: We propose a novel framework to automatically learn to aggregate and transform photometric measurements from multiple unstructured views into spatially distinctive and view-invariant low-level features, which are subsequently fed to a multi-view stereo pipeline to enhance 3D reconstruction. The illumination conditions during acquisition and the feature transform are jointly trained on a large amount of synthetic data. We further build a system to reconstruct both the geometry and anisotropic reflectance of a variety of challenging objects from hand-held scans. The effectiveness of the system is demonstrated with a lightweight prototype, consisting of a camera and an array of LEDs, as well as an off-the-shelf tablet. Our results are validated against reconstructions from a professional 3D scanner and photographs, and compare favorably with state-of-the-art techniques.
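The abstract states that the illumination conditions and the feature transform are jointly trained on synthetic data. As a rough illustration only (not the authors' code or architecture), the sketch below jointly optimizes learnable LED lighting patterns and a small per-pixel network that maps the resulting photometric measurements to normalized features, trained so that features of the same surface point seen from different views agree; every name, shape, and loss term here is an assumption made for illustration.

# Minimal sketch (illustrative assumptions only, not the published method):
# jointly learn LED lighting patterns and a per-pixel feature transform.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_LEDS, NUM_PATTERNS, FEAT_DIM = 32, 4, 8  # hypothetical sizes

class PhotometricFeatureNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable lighting patterns: per-pattern intensity of each LED,
        # squashed to [0, 1] so they stay physically realizable.
        self.light_logits = nn.Parameter(torch.randn(NUM_PATTERNS, NUM_LEDS))
        # Small MLP mapping the per-pixel measurements under all patterns
        # to a compact feature vector intended to be view-invariant.
        self.transform = nn.Sequential(
            nn.Linear(NUM_PATTERNS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, FEAT_DIM),
        )

    def forward(self, reflectance):
        # reflectance: (B, NUM_LEDS) simulated per-LED response of a pixel
        patterns = torch.sigmoid(self.light_logits)    # (P, L)
        measurements = reflectance @ patterns.t()       # (B, P) photometric measurements
        return F.normalize(self.transform(measurements), dim=-1)

# Toy training loop on synthetic data: pull features of the same surface
# point seen from two views together, push unrelated points apart.
model = PhotometricFeatureNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    same_a = torch.rand(128, NUM_LEDS)
    same_b = same_a + 0.05 * torch.randn_like(same_a)   # second view, slight perturbation
    other = torch.rand(128, NUM_LEDS)                    # unrelated surface points
    fa, fb, fo = model(same_a), model(same_b), model(other)
    loss = (1 - (fa * fb).sum(-1)).mean() + F.relu((fa * fo).sum(-1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

In an actual pipeline, the simulated reflectance would come from rendered synthetic scans and the learned features would replace raw photometric values as input to the multi-view stereo stage; the toy data and contrastive-style loss above are placeholders.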
Description: Date Revised 31.07.2025
Published: Print
Citation Status PubMed-not-MEDLINE
ISSN: 1941-0506
DOI: 10.1109/TVCG.2024.3515478