Point Cloud Completion Via Skeleton-Detail Transformer

Bibliographic details
Published in: IEEE Transactions on Visualization and Computer Graphics. - 1996. - 29(2023), 10, 23 Oct., pages 4229-4242
Main author: Zhang, Wenxiao (Author)
Other authors: Zhou, Huajian, Dong, Zhen, Liu, Jun, Yan, Qingan, Xiao, Chunxia
Format: Online article
Language: English
Published: 2023
Collection: IEEE Transactions on Visualization and Computer Graphics
Subjects: Journal Article
Description
Abstract: Point cloud shape completion plays a central role in diverse 3D vision and robotics applications. Early methods tended to generate global shapes without local detail refinement. Current methods tend to leverage local features to preserve the observed geometric details. However, they usually adopt a convolutional architecture over the incomplete point cloud to extract local features for restoring both the latent shape skeleton and the geometric details, ignoring the long-distance correlation between the skeleton and the details. In this work, we present a coarse-to-fine completion framework which makes full use of both neighboring and long-distance region cues for point cloud completion. Our network leverages a Skeleton-Detail Transformer, which contains cross-attention and self-attention layers, to fully explore the correlation from local patterns to the global shape and to use it to enhance the overall skeleton. We also propose a selective attention mechanism that reduces memory usage in the attention process without significantly affecting performance. We conduct extensive experiments on the ShapeNet dataset and on real-scanned datasets. Qualitative and quantitative evaluations demonstrate that our proposed network outperforms current state-of-the-art methods.
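The abstract describes the architecture only at a high level. As a rough illustration (a minimal sketch under assumed shapes and hyperparameters, not the authors' released implementation), the block below shows how a skeleton-detail transformer layer might combine cross-attention from coarse skeleton tokens to observed detail tokens with self-attention among skeleton tokens; a simple top-k nearest-neighbor restriction stands in for the paper's selective attention mechanism:

```python
# Illustrative sketch only: names, dimensions, and the top-k "selective
# attention" trick are assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class SkeletonDetailBlock(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8, top_k: int = 64):
        super().__init__()
        self.top_k = top_k
        # Cross-attention: skeleton tokens (queries) gather cues from detail tokens.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Self-attention: long-range correlation among skeleton tokens.
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim) for _ in range(3))
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, skel_feat, skel_xyz, detail_feat, detail_xyz):
        # skel_feat:   (B, S, C) features of coarse skeleton points, skel_xyz:   (B, S, 3)
        # detail_feat: (B, N, C) features of observed local patches,  detail_xyz: (B, N, 3)
        B, S, _ = skel_feat.shape
        N = detail_feat.shape[1]

        # Selective-attention stand-in: each skeleton query attends only to its
        # top-k nearest detail tokens, bounding the attention memory footprint.
        dist = torch.cdist(skel_xyz, detail_xyz)                    # (B, S, N)
        k = min(self.top_k, N)
        knn_idx = dist.topk(k, dim=-1, largest=False).indices       # (B, S, k)
        mask = torch.ones(B, S, N, dtype=torch.bool, device=skel_feat.device)
        mask.scatter_(-1, knn_idx, False)                           # False = may attend
        attn_mask = mask.repeat_interleave(self.cross_attn.num_heads, dim=0)  # (B*H, S, N)

        x = self.norm1(skel_feat + self.cross_attn(skel_feat, detail_feat, detail_feat,
                                                   attn_mask=attn_mask)[0])
        x = self.norm2(x + self.self_attn(x, x, x)[0])
        return self.norm3(x + self.ffn(x))


if __name__ == "__main__":
    block = SkeletonDetailBlock()
    skel_feat, detail_feat = torch.randn(2, 128, 256), torch.randn(2, 512, 256)
    skel_xyz, detail_xyz = torch.randn(2, 128, 3), torch.randn(2, 512, 3)
    print(block(skel_feat, skel_xyz, detail_feat, detail_xyz).shape)  # (2, 128, 256)
```

In this reading, the cross-attention step lets the coarse skeleton borrow observed local geometry, the self-attention step propagates those cues across distant skeleton regions, and masking to a small key subset keeps the attention cost closer to O(S·k) than O(S·N).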
Description: Date Revised 04.09.2023
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0506
DOI: 10.1109/TVCG.2022.3185247