Learning View-Based Graph Convolutional Network for Multi-View 3D Shape Analysis


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - 45(2023), 6, dated 25 June, pages 7525-7541
Main Author: Wei, Xin (Author)
Other Authors: Yu, Ruixuan, Sun, Jian
Format: Online article
Language: English
Published: 2023
Parent work: IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords: Journal Article
Description
Summary: The view-based approach, which recognizes a 3D shape through its projected 2D images, has achieved state-of-the-art results for 3D shape recognition. The major challenges are how to aggregate multi-view features and how to deal with 3D shapes in arbitrary poses. We propose two versions of a novel view-based Graph Convolutional Network, dubbed view-GCN and view-GCN++, to recognize 3D shapes based on a graph representation of multiple views. We first construct a view-graph with multiple views as graph nodes, then design two graph convolutional networks over the view-graph to hierarchically learn a discriminative shape descriptor that accounts for the relations among multiple views. Specifically, view-GCN is a hierarchical network based on two pivotal operations, i.e., feature transform based on local positional and non-local graph convolution, and graph coarsening based on a selective view-sampling operation. To deal with rotation sensitivity, we further propose view-GCN++ with a local attentional graph convolution operation and a rotation-robust view-sampling operation for graph coarsening. With these designs, view-GCN++ achieves invariance to transformations under a finite subgroup of the rotation group SO(3). Extensive experiments on benchmark datasets (i.e., ModelNet40, ScanObjectNN, RGBD and ShapeNet Core55) show that view-GCN and view-GCN++ achieve state-of-the-art results for 3D shape classification and retrieval tasks under both aligned and rotated settings.
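
The pipeline described in the abstract (per-view features as graph nodes, a local graph convolution over neighbouring views, and graph coarsening by view sampling) can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch, not the authors' released implementation: the module and function names (LocalViewGraphConv, sample_views), the k-nearest-neighbour graph construction from camera positions, and the norm-based sampling score are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of a view-graph with local graph
# convolution and a simple view-sampling coarsening step.
import torch
import torch.nn as nn

class LocalViewGraphConv(nn.Module):
    """Aggregate each view's feature with its k nearest neighbouring views."""
    def __init__(self, dim, k=4):
        super().__init__()
        self.k = k
        self.fc = nn.Linear(2 * dim, dim)

    def forward(self, feats, cams):
        # feats: (B, N, D) per-view features, cams: (B, N, 3) camera positions
        dist = torch.cdist(cams, cams)                                 # (B, N, N)
        idx = dist.topk(self.k + 1, largest=False).indices[..., 1:]    # drop self
        B, N, D = feats.shape
        nbrs = torch.gather(
            feats.unsqueeze(1).expand(B, N, N, D), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, D))                 # (B, N, k, D)
        agg = nbrs.mean(dim=2)                                         # neighbourhood mean
        return torch.relu(self.fc(torch.cat([feats, agg], dim=-1)))

def sample_views(feats, cams, m):
    """Coarsen the view-graph by keeping the m highest-scoring views
    (a hypothetical stand-in for the paper's learned selective view-sampling)."""
    scores = feats.norm(dim=-1)                                        # (B, N)
    idx = scores.topk(m, dim=1).indices                                # (B, m)
    D = feats.size(-1)
    feats_m = torch.gather(feats, 1, idx.unsqueeze(-1).expand(-1, -1, D))
    cams_m = torch.gather(cams, 1, idx.unsqueeze(-1).expand(-1, -1, 3))
    return feats_m, cams_m

if __name__ == "__main__":
    B, N, D = 2, 20, 256                                  # e.g. 20 rendered views per shape
    feats, cams = torch.randn(B, N, D), torch.randn(B, N, 3)
    conv = LocalViewGraphConv(D)
    feats = conv(feats, cams)                             # relation-aware view features
    feats, cams = sample_views(feats, cams, m=10)         # coarsened view-graph
    shape_descriptor = feats.max(dim=1).values            # (B, D) global shape descriptor
    print(shape_descriptor.shape)
```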
Description: Date Completed 07.05.2023
Date Revised 07.05.2023
published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN: 1939-3539
DOI: 10.1109/TPAMI.2022.3221785