Pointfilter: Point Cloud Filtering via Encoder-Decoder Modeling

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Visualization and Computer Graphics (1996-), vol. 27 (2021), no. 3, 01 March, pages 2015-2027
First Author: Zhang, Dongbo (Author)
Other Authors: Lu, Xuequan; Qin, Hong; He, Ying
Format: Online Article
Language: English
Published: 2021
Parent Work: IEEE Transactions on Visualization and Computer Graphics
Keywords: Journal Article
Description
Abstract: Point cloud filtering is a fundamental problem in geometry modeling and processing. Despite significant advances in recent years, existing methods still suffer from two issues: 1) they are either designed without sharp-feature preservation in mind or are less robust in preserving features; and 2) they usually have many parameters and require tedious parameter tuning. In this article, we propose a novel deep learning approach that automatically and robustly filters point clouds by removing noise while preserving sharp features. Our point-wise learning architecture consists of an encoder and a decoder. The encoder directly takes points (a point and its neighbors) as input and learns a latent representation vector, which the decoder maps to a displacement vector relating the noisy point to its ground-truth position. The trained neural network can automatically generate a set of clean points from a noisy input. Extensive experiments show that our approach outperforms state-of-the-art deep learning techniques in terms of both visual quality and quantitative error metrics. The source code and dataset can be found at https://github.com/dongbo-BUAA-VR/Pointfilter
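The abstract describes a point-wise encoder-decoder: a patch (a noisy point plus its neighbors) is encoded into a latent vector, and the decoder regresses a displacement vector that moves the noisy point toward its ground-truth position. Below is a minimal PyTorch sketch of such an architecture; the PointNet-style max-pooling encoder, layer sizes, and patch size are illustrative assumptions and may differ from the released code at https://github.com/dongbo-BUAA-VR/Pointfilter.

# Minimal sketch of a point-wise encoder-decoder filter as described in
# the abstract. Layer sizes, patch size, and the PointNet-style encoder
# are illustrative assumptions, not taken from the authors' repository.
import torch
import torch.nn as nn

class PointfilterSketch(nn.Module):
    def __init__(self, latent_dim: int = 1024):
        super().__init__()
        # Shared per-point MLP (1x1 convolutions), applied to every neighbor.
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )
        # Decoder: regress a 3D displacement from the latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 3), nn.Tanh(),  # bounded displacement (assumes unit-scaled patches)
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        # patch: (B, K, 3) neighbors of a noisy point, assumed centered at
        # that point and scaled to a unit ball before being fed in.
        x = self.encoder(patch.transpose(1, 2))  # (B, latent_dim, K)
        latent = torch.max(x, dim=2).values      # order-invariant pooling
        return self.decoder(latent)              # (B, 3) displacement

if __name__ == "__main__":
    model = PointfilterSketch()
    patch = torch.randn(8, 64, 3)                # 8 patches, 64 neighbors each
    displacement = model(patch)
    print(displacement.shape)                    # torch.Size([8, 3])

In this sketch, the filtered point is the noisy center point plus the predicted displacement, computed in the patch's local coordinate frame; max pooling over neighbors makes the encoder invariant to the ordering of the input points.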
Description: Date Revised 01.02.2021
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0506
DOI: 10.1109/TVCG.2020.3027069