LEADER |
01000caa a22002652c 4500 |
001 |
NLM31428799X |
003 |
DE-627 |
005 |
20250227212352.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2019 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1016/j.patrec.2019.08.004
|2 doi
|
028 |
5 |
2 |
|a pubmed25n1047.xml
|
035 |
|
|
|a (DE-627)NLM31428799X
|
035 |
|
|
|a (NLM)32855578
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Siddiqua, Ayesha
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Semantics-enhanced supervised deep autoencoder for depth image-based 3D model retrieval
|
264 |
|
1 |
|c 2019
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 29.03.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Increased accuracy and affordability of depth sensors such as Kinect have created a rich source of depth data for various 3D-oriented applications. Specifically, 3D model retrieval is attracting attention in the field of computer vision and pattern recognition due to its numerous applications. A cross-domain retrieval approach such as depth image-based 3D model retrieval faces the challenges of occlusion, noise and view variability present in both query and training data. In this paper, we propose a new supervised deep autoencoder approach followed by semantic modeling to retrieve 3D shapes based on depth images. The key novelty is the two-fold feature abstraction to cope with the incompleteness and ambiguity present in the depth images. First, we develop a supervised autoencoder to extract robust features from both real depth images and synthetic ones rendered from 3D models, which is intended to balance the reconstruction and classification capabilities on mixed-domain data. Then, semantic modeling of the supervised autoencoder features offers the next level of abstraction to cope with the incompleteness and ambiguity of the depth data. Interestingly, unlike other pairwise model structures, we argue that cross-domain retrieval is still possible using only a single deep network trained on real and synthetic data. The experimental results on the NYUD2 and ModelNet10 datasets demonstrate that the proposed supervised method outperforms recent approaches for cross-modal 3D model retrieval.
|
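[Note] The abstract above describes a supervised deep autoencoder trained jointly for reconstruction and classification on mixed real/synthetic depth data. The following is only a minimal illustrative sketch of that general idea in PyTorch; the layer sizes, trade-off weight, and omission of the semantic-modeling step are assumptions for illustration, not the authors' implementation.

# Sketch: supervised autoencoder with joint reconstruction + classification loss
# (assumed architecture; not the code from the cited paper)
import torch
import torch.nn as nn

class SupervisedAutoencoder(nn.Module):
    def __init__(self, input_dim=64 * 64, latent_dim=128, num_classes=10):
        super().__init__()
        # Encoder maps a flattened depth image to a latent code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim), nn.ReLU(),
        )
        # Decoder reconstructs the depth image (reconstruction objective).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, input_dim),
        )
        # Classifier on the latent code supplies the supervised objective.
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z), z

model = SupervisedAutoencoder()
x = torch.rand(8, 64 * 64)        # batch of flattened depth images (real or rendered)
y = torch.randint(0, 10, (8,))    # shared category labels across both domains
recon, logits, z = model(x)

# Joint loss balances reconstruction and classification so one latent
# space can serve retrieval across real and synthetic depth data.
lam = 0.5                         # trade-off weight (assumed)
loss = nn.functional.mse_loss(recon, x) + lam * nn.functional.cross_entropy(logits, y)
loss.backward()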
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a 3D model retrieval
|
650 |
|
4 |
|a cross-modal retrieval
|
650 |
|
4 |
|a deep autoencoder
|
650 |
|
4 |
|a shape matching
|
700 |
1 |
|
|a Fan, Guoliang
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t Pattern recognition letters
|d 1998
|g 125(2019) vom: 01. Juli, Seite 806-812
|w (DE-627)NLM098154265
|x 0167-8655
|7 nnas
|
773 |
1 |
8 |
|g volume:125
|g year:2019
|g day:01
|g month:07
|g pages:806-812
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1016/j.patrec.2019.08.004
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 125
|j 2019
|b 01
|c 07
|h 806-812
|