Hierarchical Shape-Consistent Transformer for Unsupervised Point Cloud Shape Correspondence



Bibliographic details
Published in: IEEE Transactions on Image Processing: a publication of the IEEE Signal Processing Society, vol. 32 (2023), pp. 2734-2748
Main author: He, Jianfeng (Author)
Other authors: Deng, Jiacheng; Zhang, Tianzhu; Zhang, Zhe; Zhang, Yongdong
Format: Online article
Language: English
Published: 2023
Collection: IEEE Transactions on Image Processing: a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Abstract: Point cloud shape correspondence aims at accurately mapping one point cloud to another point cloud with a different 3D shape. Since point clouds are usually sparse, disordered, irregular, and diverse in shape, it is challenging to learn consistent point cloud representations and achieve accurate matching between different point cloud shapes. To address these issues, we propose a Hierarchical Shape-consistent TRansformer for unsupervised point cloud shape correspondence (HSTR), comprising a multi-receptive-field point representation encoder and a shape-consistent constrained module in a unified architecture. The proposed HSTR enjoys several merits. In the multi-receptive-field point representation encoder, we set progressively larger receptive fields in different blocks to simultaneously consider the local structure and the long-range context. In the shape-consistent constrained module, we design two novel shape selective whitening losses, which complement each other to suppress features sensitive to shape change. Extensive experimental results on four standard benchmarks demonstrate the superiority and generalization ability of our approach over existing methods at a similar model scale, and our method achieves new state-of-the-art results.
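The abstract names two mechanisms: an encoder whose receptive field grows across blocks, and two shape selective whitening losses that suppress features sensitive to shape change. The paper's implementation is not part of this record, so the PyTorch sketch below is only an illustration under assumptions of our own: k-NN neighbourhood size stands in for the growing receptive field, and a covariance-suppression penalty with a crude thresholding rule stands in for the selective whitening idea. All class names, tensor shapes, and hyper-parameters (dims, ks, the 0.01 jitter) are hypothetical.

# Hedged sketch only: every module name, shape, and hyper-parameter below
# is an assumption, not the authors' implementation.
import torch
import torch.nn as nn


class MRFBlock(nn.Module):
    """One encoder block: aggregate features over a k-NN neighbourhood.
    Stacking blocks with growing k mimics 'progressively larger receptive
    fields' (local structure first, long-range context later)."""

    def __init__(self, d_in, d_out, k):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())

    def forward(self, points, feats):
        # points: (B, N, 3), feats: (B, N, d_in)
        idx = torch.cdist(points, points).topk(self.k, largest=False).indices
        b = torch.arange(points.size(0), device=points.device)[:, None, None]
        neigh = feats[b, idx]                      # (B, N, k, d_in)
        return self.mlp(neigh).max(dim=2).values   # max-pool over neighbourhood


class MultiReceptiveFieldEncoder(nn.Module):
    """Receptive field k grows block by block (8 -> 16 -> 32; assumed values)."""

    def __init__(self, dims=(3, 64, 128, 256), ks=(8, 16, 32)):
        super().__init__()
        self.blocks = nn.ModuleList(
            MRFBlock(i, o, k) for i, o, k in zip(dims[:-1], dims[1:], ks)
        )

    def forward(self, points):
        feats = points
        for blk in self.blocks:
            feats = blk(points, feats)
        return feats                               # (B, N, dims[-1])


def selective_whitening_loss(f_a, f_b):
    """Stand-in for the 'shape selective whitening losses': penalise the
    off-diagonal feature covariances that react most to a shape change,
    given features f_a, f_b of shape (B, N, C) for two shape variants."""
    def cov(f):
        f = f - f.mean(dim=1, keepdim=True)        # centre over points
        f = f / (f.std(dim=1, keepdim=True) + 1e-5)
        return torch.einsum('bnc,bnd->bcd', f, f) / f.size(1)   # (B, C, C)
    c_a, c_b = cov(f_a), cov(f_b)
    sensitivity = (c_a - c_b).abs()                # entries that react to shape change
    mask = (sensitivity > sensitivity.mean()).float()   # crude selection rule (assumed)
    off_diag = 1.0 - torch.eye(f_a.size(-1), device=f_a.device)
    return ((c_a.abs() + c_b.abs()) * mask * off_diag).mean()


# Toy usage: two mild deformations of the same cloud share one encoder.
pts = torch.rand(2, 1024, 3)
enc = MultiReceptiveFieldEncoder()
loss = selective_whitening_loss(enc(pts), enc(pts + 0.01 * torch.randn_like(pts)))

In this reading, "selective" means only the covariance entries that differ between the two shape variants are penalised, which is one plausible way to suppress shape-sensitive feature correlations while leaving the remaining structure intact.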
Record details:
Date completed: 21.05.2023
Date revised: 21.05.2023
Publication type: Print-Electronic
Citation status: PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2023.3272821