VERAM : View-Enhanced Recurrent Attention Model for 3D Shape Classification

Multi-view deep neural networks are perhaps the most successful approach to 3D shape classification. However, the fusion of multi-view features based on max or average pooling lacks a view selection mechanism, limiting its application in, e.g., multi-view active object recognition by a robot. This paper presents VERAM, a view-enhanced recurrent attention model capable of actively selecting a sequence of views for highly accurate 3D shape classification. VERAM addresses an important issue commonly found in existing attention-based models, i.e., the unbalanced training of the subnetworks corresponding to next-view estimation and shape classification: the classification subnetwork is easily overfitted while the view estimation one is usually poorly trained, leading to suboptimal classification performance. This is surmounted by three essential view-enhancement strategies: 1) enhancing the information flow of gradient backpropagation for the view estimation subnetwork, 2) devising a highly informative reward function for the reinforcement training of view estimation, and 3) formulating a novel loss function that explicitly circumvents view duplication. Taking grayscale images as input and AlexNet as the CNN architecture, VERAM with 9 views achieves instance-level and class-level accuracies of 95.5 and 95.3 percent on ModelNet10, and 93.7 and 92.1 percent on ModelNet40, both state-of-the-art under the same number of views.
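The abstract describes a loop in which the model repeatedly observes one view, folds its feature into a recurrent state, and proposes the next view, with a penalty against revisiting views, before classifying from the aggregated state. The following is a minimal toy sketch of that control flow only; every component here is a hypothetical stand-in (the actual VERAM uses an AlexNet CNN, a recurrent network, and a REINFORCE-trained view-estimation subnetwork, none of which are reproduced here).

```python
import math

def extract_feature(view_image):
    # Stand-in for the CNN feature extractor: mean pixel intensity.
    return sum(view_image) / len(view_image)

def classify_views(views, num_steps=3):
    """Toy recurrent attention loop: observe a view, update the hidden
    state, pick the next view while avoiding duplicates, then classify."""
    hidden = 0.0
    visited = set()
    view_idx = 0                       # start from an arbitrary view
    for _ in range(num_steps):
        visited.add(view_idx)
        feat = extract_feature(views[view_idx])
        hidden = math.tanh(hidden + feat)   # recurrent aggregation
        # Stand-in next-view "policy": the nearest unvisited view index.
        # (VERAM instead learns this policy; its loss explicitly
        # discourages selecting a view twice.)
        candidates = [i for i in range(len(views)) if i not in visited]
        if not candidates:
            break
        view_idx = min(candidates, key=lambda i: abs(i - view_idx))
    # Final classification from the aggregated hidden state
    # (stand-in for the classification subnetwork).
    label = 1 if hidden > 0 else 0
    return label, sorted(visited)

# Four toy 2-pixel "views" of one shape.
views = [[0.2, 0.4], [0.9, 0.8], [0.1, 0.1], [0.5, 0.6]]
label, sequence = classify_views(views)   # sequence contains no duplicates
```

The point of the sketch is the structure the paper's strategies act on: the view-selection policy and the classifier share the recurrent state, which is why unbalanced training between the two subnetworks is the central problem the three strategies address.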

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 25(2019), 12, dated 20 Dec., pages 3244-3257
First author: Chen, Songle (Author)
Other authors: Zheng, Lintao, Zhang, Yan, Sun, Zhixin, Xu, Kai
Format: Online article
Language: English
Published: 2019
Access to the parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article, Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM287753557
003 DE-627
005 20231225054725.0
007 cr uuu---uuuuu
008 231225s2019 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2018.2866793  |2 doi 
028 5 2 |a pubmed24n0959.xml 
035 |a (DE-627)NLM287753557 
035 |a (NLM)30137010 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Chen, Songle  |e verfasserin  |4 aut 
245 1 0 |a VERAM  |b View-Enhanced Recurrent Attention Model for 3D Shape Classification 
264 1 |c 2019 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 11.03.2020 
500 |a Date Revised 11.03.2020 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Multi-view deep neural networks are perhaps the most successful approach to 3D shape classification. However, the fusion of multi-view features based on max or average pooling lacks a view selection mechanism, limiting its application in, e.g., multi-view active object recognition by a robot. This paper presents VERAM, a view-enhanced recurrent attention model capable of actively selecting a sequence of views for highly accurate 3D shape classification. VERAM addresses an important issue commonly found in existing attention-based models, i.e., the unbalanced training of the subnetworks corresponding to next-view estimation and shape classification: the classification subnetwork is easily overfitted while the view estimation one is usually poorly trained, leading to suboptimal classification performance. This is surmounted by three essential view-enhancement strategies: 1) enhancing the information flow of gradient backpropagation for the view estimation subnetwork, 2) devising a highly informative reward function for the reinforcement training of view estimation, and 3) formulating a novel loss function that explicitly circumvents view duplication. Taking grayscale images as input and AlexNet as the CNN architecture, VERAM with 9 views achieves instance-level and class-level accuracies of 95.5 and 95.3 percent on ModelNet10, and 93.7 and 92.1 percent on ModelNet40, both state-of-the-art under the same number of views
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Zheng, Lintao  |e verfasserin  |4 aut 
700 1 |a Zhang, Yan  |e verfasserin  |4 aut 
700 1 |a Sun, Zhixin  |e verfasserin  |4 aut 
700 1 |a Xu, Kai  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 25(2019), 12 vom: 20. Dez., Seite 3244-3257  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:25  |g year:2019  |g number:12  |g day:20  |g month:12  |g pages:3244-3257 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2018.2866793  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 25  |j 2019  |e 12  |b 20  |c 12  |h 3244-3257