Learning Photometric Feature Transform for Free-Form Object Scan

We propose a novel framework to automatically learn to aggregate and transform photometric measurements from multiple unstructured views into spatially distinctive and view-invariant low-level features, which are subsequently fed to a multi-view stereo pipeline to enhance 3D reconstruction. The illumination conditions during acquisition and the feature transform are jointly trained on a large amount of synthetic data. We further build a system to reconstruct both the geometry and anisotropic reflectance of a variety of challenging objects from hand-held scans. The effectiveness of the system is demonstrated with a lightweight prototype, consisting of a camera and an array of LEDs, as well as an off-the-shelf tablet. Our results are validated against reconstructions from a professional 3D scanner and photographs, and compare favorably with state-of-the-art techniques.
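For illustration, the following is a minimal conceptual sketch (in PyTorch) of the idea described in the abstract: the illumination patterns and the feature transform are optimized jointly on synthetic data so that measurements of the same surface point map to similar, distinctive features. The toy Lambertian renderer, the LED/pattern/view counts, the InfoNCE-style contrastive loss, and all identifiers (FeatureTransform, render, illumination) are assumptions made for this sketch, not details of the paper's implementation.

# Minimal conceptual sketch, assuming a toy Lambertian image-formation model
# and synthetic data; illustrative only, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureTransform(nn.Module):
    """Maps a surface point's photometric measurements from several views
    into a compact feature vector meant to be view-invariant and distinctive."""
    def __init__(self, num_views, num_patterns, feat_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_views * num_patterns, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, measurements):                 # (B, views, patterns)
        x = measurements.flatten(start_dim=1)
        return F.normalize(self.net(x), dim=-1)      # unit-length features

num_leds, num_patterns, num_views = 32, 4, 3
light_dirs = F.normalize(torch.randn(num_leds, 3), dim=-1)
# Learnable illumination: intensity of each LED in each lighting pattern.
illumination = nn.Parameter(torch.rand(num_patterns, num_leds))
model = FeatureTransform(num_views, num_patterns)
optim = torch.optim.Adam([illumination, *model.parameters()], lr=1e-3)

def render(normals, albedo, patterns):
    # Toy Lambertian shading per LED, mixed by non-negative pattern weights.
    shading = (normals @ light_dirs.T).clamp(min=0) * albedo   # (B, leds)
    return shading @ patterns.relu().T                         # (B, patterns)

for step in range(200):
    # Random synthetic surface points observed twice with small perturbations;
    # matching points should yield similar features (contrastive objective).
    normals = F.normalize(torch.randn(256, 3), dim=-1)
    albedo = torch.rand(256, 1)
    base = render(normals, albedo, illumination).unsqueeze(1).expand(-1, num_views, -1)
    m1 = base + 0.02 * torch.randn_like(base)
    m2 = base + 0.02 * torch.randn_like(base)
    f1, f2 = model(m1), model(m2)
    logits = f1 @ f2.T / 0.1                         # InfoNCE-style loss
    loss = F.cross_entropy(logits, torch.arange(256))
    optim.zero_grad(); loss.backward(); optim.step()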

Bibliographic Details

Published in: IEEE transactions on visualization and computer graphics. - 1996. - 31(2025), 9, 22 Aug., pages 6398-6409
Main Author: Feng, Xiang (Author)
Other Authors: Kang, Kaizhang, Pei, Fan, Ding, Huakeng, You, Jinjiang, Tan, Ping, Zhou, Kun, Wu, Hongzhi
Format: Online Article
Language: English
Published: 2025
Collection: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000caa a22002652c 4500
001 NLM38506828X
003 DE-627
005 20250801232550.0
007 cr uuu---uuuuu
008 250508s2025 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2024.3515478  |2 doi 
028 5 2 |a pubmed25n1516.xml 
035 |a (DE-627)NLM38506828X 
035 |a (NLM)40030491 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Feng, Xiang  |e verfasserin  |4 aut 
245 1 0 |a Learning Photometric Feature Transform for Free-Form Object Scan 
264 1 |c 2025 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 31.07.2025 
500 |a published: Print 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a We propose a novel framework to automatically learn to aggregate and transform photometric measurements from multiple unstructured views into spatially distinctive and view-invariant low-level features, which are subsequently fed to a multi-view stereo pipeline to enhance 3D reconstruction. The illumination conditions during acquisition and the feature transform are jointly trained on a large amount of synthetic data. We further build a system to reconstruct both the geometry and anisotropic reflectance of a variety of challenging objects from hand-held scans. The effectiveness of the system is demonstrated with a lightweight prototype, consisting of a camera and an array of LEDs, as well as an off-the-shelf tablet. Our results are validated against reconstructions from a professional 3D scanner and photographs, and compare favorably with state-of-the-art techniques 
650 4 |a Journal Article 
700 1 |a Kang, Kaizhang  |e verfasserin  |4 aut 
700 1 |a Pei, Fan  |e verfasserin  |4 aut 
700 1 |a Ding, Huakeng  |e verfasserin  |4 aut 
700 1 |a You, Jinjiang  |e verfasserin  |4 aut 
700 1 |a Tan, Ping  |e verfasserin  |4 aut 
700 1 |a Zhou, Kun  |e verfasserin  |4 aut 
700 1 |a Wu, Hongzhi  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 31(2025), 9 vom: 22. Aug., Seite 6398-6409  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnas 
773 1 8 |g volume:31  |g year:2025  |g number:9  |g day:22  |g month:08  |g pages:6398-6409 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2024.3515478  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 31  |j 2025  |e 9  |b 22  |c 08  |h 6398-6409