Detecting 3D Points of Interest Using Multiple Features and Stacked Auto-encoder



Bibliographic Details

Published in: IEEE transactions on visualization and computer graphics. - 1996. - 25(2019), 8, 18 Aug., pages 2583-2596
Main Author: Shu, Zhenyu (Author)
Other Authors: Xin, Shiqing, Xu, Xin, Liu, Ligang, Kavan, Ladislav
Format: Online Article
Language: English
Published: 2019
Access to parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
Description
Abstract: Considering the fact that points of interest on 3D shapes can be discriminated from a geometric perspective, it is reasonable to map the geometric signature of a point $p$ to a probability value encoding to what degree $p$ is a point of interest, especially for a specific class of 3D shapes. Based on this observation, we propose a three-phase algorithm for learning and predicting points of interest on 3D shapes by using multiple feature descriptors. Our algorithm requires two separate deep neural networks (stacked auto-encoders) to accomplish the task. During the first phase, we predict the membership of the given 3D shape according to a set of geometric descriptors using a deep neural network. After that, we train the other deep neural network to predict a probability distribution defined on the surface representing the possibility of a point being a point of interest. Finally, we use a manifold clustering technique to extract a set of points of interest as the output. Experimental results show superior detection performance of the proposed method over the previous state-of-the-art approaches.
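The second phase described in the abstract maps a per-point geometric descriptor to an interest probability with a stacked auto-encoder style network. A minimal sketch of that idea is shown below, assuming hypothetical layer sizes, random (untrained) weights, and a simple thresholding step standing in for the paper's manifold clustering; none of these choices reflect the authors' actual architecture or training procedure.

```python
import numpy as np

def sigmoid(x):
    """Elementwise logistic function, squashing scores into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

class StackedEncoderScorer:
    """Sketch of an encoder stack that maps an n-dimensional geometric
    descriptor of a surface point to a scalar interest probability.
    Layer sizes and initialization are illustrative assumptions."""

    def __init__(self, layer_sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.weights, self.biases = [], []
        for d_in, d_out in zip(layer_sizes[:-1], layer_sizes[1:]):
            self.weights.append(rng.normal(0.0, 0.1, size=(d_in, d_out)))
            self.biases.append(np.zeros(d_out))

    def score(self, descriptors):
        """descriptors: (n_points, d) array -> (n_points,) probabilities."""
        h = descriptors
        for W, b in zip(self.weights, self.biases):
            h = sigmoid(h @ W + b)  # one encoder layer per iteration
        return h.ravel()

def extract_points_of_interest(probs, threshold):
    """Stand-in for the clustering phase: keep indices whose predicted
    probability exceeds a threshold."""
    return np.flatnonzero(probs > threshold)

# Toy usage: 100 surface points with 16-dimensional descriptors.
rng = np.random.default_rng(1)
descriptors = rng.normal(size=(100, 16))
scorer = StackedEncoderScorer([16, 8, 4, 1])
probs = scorer.score(descriptors)
poi = extract_points_of_interest(probs, threshold=probs.mean())
```

In the paper the encoder stack would be pre-trained layer by layer as auto-encoders and then fine-tuned on labeled interest points; the sketch above only illustrates the forward scoring pass and the final selection step.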
Description: Date Revised 23.07.2019
published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN:1941-0506
DOI:10.1109/TVCG.2018.2848628