LEADER |
01000caa a22002652c 4500 |
001 |
NLM318044919 |
003 |
DE-627 |
005 |
20250228104730.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2021 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2020.3039378
|2 doi
|
028 |
5 |
2 |
|a pubmed25n1059.xml
|
035 |
|
|
|a (DE-627)NLM318044919
|
035 |
|
|
|a (NLM)33237859
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Sun, Kai
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a DRCNN
|b Dynamic Routing Convolutional Neural Network for Multi-View 3D Object Recognition
|
264 |
|
1 |
|c 2021
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 08.12.2020
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a 3D object recognition is one of the most important tasks in 3D data processing and has been extensively studied recently. Researchers have proposed various 3D recognition methods based on deep learning, among which view-based approaches form a typical class. However, in view-based methods, the view pooling layer commonly used to fuse multi-view features causes a loss of visual information. To alleviate this problem, in this paper, we construct a novel layer called the Dynamic Routing Layer (DRL) by modifying the dynamic routing algorithm of the capsule network, to more effectively fuse the features of each view. Concretely, in the DRL, we use rearrangement and affine transformation to convert features, then leverage the modified dynamic routing algorithm to adaptively choose the converted features, instead of ignoring all but the most active feature as in the view pooling layer. We also illustrate that the view pooling layer is a special case of our DRL. In addition, based on the DRL, we further present a Dynamic Routing Convolutional Neural Network (DRCNN) for multi-view 3D object recognition. Our experiments on three 3D benchmark datasets show that our proposed DRCNN outperforms many state-of-the-art methods, which demonstrates the efficacy of our approach.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Zhang, Jiangshe
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liu, Junmin
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Yu, Ruixuan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Song, Zengjie
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 30(2021) vom: 25., Seite 868-877
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnas
|
773 |
1 |
8 |
|g volume:30
|g year:2021
|g day:25
|g pages:868-877
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2020.3039378
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 30
|j 2021
|b 25
|h 868-877
|