C2FNet : A Coarse-to-Fine Network for Multi-View 3D Point Cloud Generation
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society (1992-), vol. 31 (2022), pp. 6707-6718
Main author:
Other authors:
Format: Online article
Language: English
Published: 2022
Access to the parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Abstract: Generation of a 3D model of an object from multiple views has a wide range of applications. When multiple views are available, different parts of an object are captured most accurately by a particular view or a subset of views. In this paper, a novel coarse-to-fine network (C2FNet) is proposed for 3D point cloud generation from multiple views. C2FNet generates subsets of 3D points that are best captured by individual views with the support of other views in a coarse-to-fine way, and then fuses these subsets of 3D points into a whole point cloud. It consists of a coarse generation module, where coarse point clouds are constructed from multiple views by exploring the cross-view spatial relations, and a fine generation module, where the coarse point cloud features are refined under the guidance of global consistency in appearance and context. Extensive experiments on the benchmark datasets have demonstrated that the proposed method outperforms the state-of-the-art methods.
Description: Date Revised 31.10.2022; published: Print-Electronic; Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2022.3203213
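The abstract describes a two-stage pipeline: a coarse generation module that produces per-view point subsets using cross-view information, and a fine generation module that refines them under a global consistency signal before the subsets are fused into one point cloud. Below is a minimal PyTorch sketch of that pipeline shape only; the module names, layer sizes, the mean-pooled global code, and the concatenation-based fusion are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of the coarse-to-fine pipeline outlined in the abstract.
# All design choices here (MLP heads, mean-pooled global code, simple
# concatenation fusion) are assumptions for illustration.
import torch
import torch.nn as nn


class CoarseGenerator(nn.Module):
    """Maps per-view features to a coarse subset of 3D points per view."""

    def __init__(self, feat_dim=256, points_per_view=256):
        super().__init__()
        self.points_per_view = points_per_view
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, points_per_view * 3),
        )

    def forward(self, view_feats):                   # (B, V, feat_dim)
        B, V, _ = view_feats.shape
        pts = self.mlp(view_feats)                   # (B, V, P*3)
        return pts.view(B, V, self.points_per_view, 3)


class FineGenerator(nn.Module):
    """Refines coarse points with residual offsets conditioned on a global code."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.offset_mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, coarse_pts, view_feats):       # (B, V, P, 3), (B, V, feat_dim)
        global_code = view_feats.mean(dim=1)         # crude stand-in for a global consistency code
        B, V, P, _ = coarse_pts.shape
        cond = global_code[:, None, None, :].expand(B, V, P, -1)
        offsets = self.offset_mlp(torch.cat([coarse_pts, cond], dim=-1))
        return coarse_pts + offsets                  # refined per-view point subsets


class C2FNetSketch(nn.Module):
    def __init__(self, feat_dim=256, points_per_view=256):
        super().__init__()
        self.coarse = CoarseGenerator(feat_dim, points_per_view)
        self.fine = FineGenerator(feat_dim)

    def forward(self, view_feats):                   # (B, V, feat_dim) per-view features
        coarse = self.coarse(view_feats)             # coarse per-view point subsets
        fine = self.fine(coarse, view_feats)         # refined per-view point subsets
        B = fine.shape[0]
        return fine.reshape(B, -1, 3)                # fuse subsets into a single point cloud


if __name__ == "__main__":
    feats = torch.randn(2, 3, 256)                   # batch of 2 objects, 3 views each
    print(C2FNetSketch()(feats).shape)               # torch.Size([2, 768, 3])
```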