Hierarchical Shape-Consistent Transformer for Unsupervised Point Cloud Shape Correspondence
Published in: | IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - Vol. 32 (2023), pp. 2734-2748 |
---|---|
Format: | Online article |
Language: | English |
Published: | 2023 |
Parent work: | IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society |
Keywords: | Journal Article |
Abstract: | Point cloud shape correspondence aims to accurately map one point cloud onto another across diverse 3D shapes. Since point clouds are typically sparse, disordered, irregular, and varied in shape, it is challenging to learn consistent point cloud representations and to accurately match different point cloud shapes. To address these issues, we propose a Hierarchical Shape-consistent TRansformer for unsupervised point cloud shape correspondence (HSTR), which unifies a multi-receptive-field point representation encoder and a shape-consistent constrained module in a single architecture. The proposed HSTR has several merits. In the multi-receptive-field point representation encoder, we set progressively larger receptive fields in different blocks to capture both local structure and long-range context. In the shape-consistent constrained module, we design two novel shape selective whitening losses that complement each other to suppress features sensitive to shape change. Extensive experiments on four standard benchmarks demonstrate the superiority and generalization ability of our approach over existing methods at a similar model scale, and our method achieves new state-of-the-art results. |
---|---|
Description: | Date Completed: 21.05.2023; Date Revised: 21.05.2023; Published: Print-Electronic; Citation Status: PubMed-not-MEDLINE |
ISSN: | 1941-0042 |
DOI: | 10.1109/TIP.2023.3272821 |
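The abstract describes shape selective whitening losses that suppress feature statistics sensitive to shape change. The following is a minimal, hypothetical sketch of one such loss, not the authors' actual formulation: it compares feature covariance matrices from two shape variants of the same object, treats the most variant-sensitive off-diagonal entries as shape-sensitive, and drives them toward zero. All names, the top-k selection rule, and the use of plain covariance (rather than any instance-normalized statistic the paper may use) are assumptions for illustration.

```python
import numpy as np

def selective_whitening_loss(feat_a, feat_b, k=8):
    """Hypothetical sketch of a shape selective whitening loss.

    feat_a, feat_b: (N, C) point features extracted from two shape
    variants of the same object. Covariance entries that differ most
    between the two variants are assumed shape-sensitive; the loss
    pushes those entries toward zero (partial whitening).
    """
    def cov(f):
        f = f - f.mean(axis=0, keepdims=True)   # center each channel
        return f.T @ f / (f.shape[0] - 1)       # (C, C) covariance

    cov_a, cov_b = cov(feat_a), cov(feat_b)
    diff = np.abs(cov_a - cov_b)                # per-entry sensitivity

    # Select the k most shape-sensitive off-diagonal entries.
    off_diag = ~np.eye(diff.shape[0], dtype=bool)
    thresh = np.sort(diff[off_diag])[-k]        # k-th largest value
    mask = (diff >= thresh) & off_diag

    # Penalize those covariance entries in both variants.
    return (np.abs(cov_a[mask]).mean() + np.abs(cov_b[mask]).mean()) / 2
```

In practice such a loss would be computed on encoder features inside the training loop and added to the correspondence objective; diagonal entries are left untouched so that per-channel variance (often shape-agnostic) is preserved.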