Learning Photometric Feature Transform for Free-Form Object Scan
| Published in: | IEEE Transactions on Visualization and Computer Graphics. - 1996. - 31(2025), 9, 22 Aug., pages 6398-6409 |
|---|---|
| Main author: | |
| Other authors: | |
| Format: | Online article |
| Language: | English |
| Published: | 2025 |
| Collection: | IEEE Transactions on Visualization and Computer Graphics |
| Subjects: | Journal Article |
| Abstract: | We propose a novel framework that automatically learns to aggregate and transform photometric measurements from multiple unstructured views into spatially distinctive, view-invariant low-level features, which are subsequently fed to a multi-view stereo pipeline to enhance 3D reconstruction. The illumination conditions during acquisition and the feature transform are jointly trained on a large amount of synthetic data. We further build a system to reconstruct both the geometry and anisotropic reflectance of a variety of challenging objects from hand-held scans. The effectiveness of the system is demonstrated with a lightweight prototype, consisting of a camera and an array of LEDs, as well as an off-the-shelf tablet. Our results are validated against reconstructions from a professional 3D scanner and photographs, and compare favorably with state-of-the-art techniques. |
| Description: | Date Revised: 31.07.2025; Published: Print; Citation Status: PubMed-not-MEDLINE |
| ISSN: | 1941-0506 |
| DOI: | 10.1109/TVCG.2024.3515478 |
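
The abstract above describes jointly optimizing the acquisition illumination and a transform that pools multi-view photometric measurements into view-invariant features. The sketch below is a minimal, hypothetical illustration of that general idea only, not the authors' method or released code: every name (`LearnedIllumination`, `PhotometricFeatureNet`, `NUM_LEDS`, `FEATURE_DIM`, the triplet loss, etc.) is an assumption introduced for illustration, and the paper's actual architecture, illumination model, and training objective may differ.

```python
# Minimal illustrative sketch (assumed PyTorch setup); NOT the paper's implementation.
import torch
import torch.nn as nn

NUM_LEDS = 32      # hypothetical size of the LED array
NUM_VIEWS = 8      # hypothetical number of unstructured views per surface point
FEATURE_DIM = 16   # hypothetical output feature dimension


class LearnedIllumination(nn.Module):
    """Trainable LED intensities, standing in for the learned acquisition illumination."""

    def __init__(self, num_leds: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_leds))  # one learnable weight per LED

    def forward(self, per_led_response: torch.Tensor) -> torch.Tensor:
        # per_led_response: (..., num_leds) simulated pixel response under each LED.
        weights = torch.sigmoid(self.logits)               # keep intensities in [0, 1]
        # Weighted sum = pixel intensity captured under the learned illumination pattern.
        return (per_led_response * weights).sum(dim=-1, keepdim=True)


class PhotometricFeatureNet(nn.Module):
    """Maps multi-view photometric measurements to a view-invariant feature."""

    def __init__(self, feature_dim: int):
        super().__init__()
        self.per_view_mlp = nn.Sequential(
            nn.Linear(1, 32), nn.ReLU(),
            nn.Linear(32, feature_dim),
        )

    def forward(self, measurements: torch.Tensor) -> torch.Tensor:
        # measurements: (batch, num_views, 1) captured intensities of one surface point.
        per_view = self.per_view_mlp(measurements)  # (batch, num_views, feature_dim)
        # Symmetric pooling over views makes the feature independent of view order/count.
        return per_view.mean(dim=1)                 # (batch, feature_dim)


# Joint optimization: gradients flow into both the illumination and the feature transform.
illum = LearnedIllumination(NUM_LEDS)
net = PhotometricFeatureNet(FEATURE_DIM)
optimizer = torch.optim.Adam(list(illum.parameters()) + list(net.parameters()), lr=1e-3)

# Toy synthetic batch: per-LED responses of a surface point (anchor), the same point
# re-observed with slight noise (positive), and a different point (negative).
anchor_resp = torch.rand(64, NUM_VIEWS, NUM_LEDS)
positive_resp = anchor_resp + 0.01 * torch.randn_like(anchor_resp)
negative_resp = torch.rand(64, NUM_VIEWS, NUM_LEDS)

def embed(resp: torch.Tensor) -> torch.Tensor:
    return net(illum(resp))  # simulate capture under learned lighting, then transform

# A metric-style loss (assumed here) pushes features of the same point together and
# features of different points apart, i.e. spatially distinctive yet view-invariant.
loss = nn.functional.triplet_margin_loss(
    embed(anchor_resp), embed(positive_resp), embed(negative_resp)
)
loss.backward()
optimizer.step()
```

In this sketch the symmetric (mean) pooling over views is what provides invariance to the number and ordering of views; the resulting per-pixel features could then be handed to a standard multi-view stereo matcher in place of raw intensities, which is the role the abstract assigns to the learned features.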