LEADER |
01000naa a22002652 4500 |
001 |
NLM286360780 |
003 |
DE-627 |
005 |
20231225051529.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2019 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2018.2843329
|2 doi
|
028 |
5 |
2 |
|a pubmed24n0954.xml
|
035 |
|
|
|a (DE-627)NLM286360780
|
035 |
|
|
|a (NLM)29993535
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Cao, Chunshui
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Feedback Convolutional Neural Network for Visual Localization and Segmentation
|
264 |
|
1 |
|c 2019
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 07.08.2019
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Feedback is a fundamental mechanism in the human visual system, but it has not been explored deeply in the design of computer vision algorithms. In this paper, we claim that feedback plays a critical role in understanding convolutional neural networks (CNNs), e.g., how a neuron in a CNN describes an object's pattern, and how a collection of neurons forms a comprehensive perception of an object. To model feedback in CNNs, we propose a novel model named Feedback CNN and develop two new processing algorithms, i.e., neural pathway pruning and pattern recovering. We mathematically prove that the proposed method can reach a local optimum. Note that Feedback CNN belongs to the family of weakly supervised methods and can be trained using only category-level labels, yet it possesses a powerful capability to accurately localize and segment category-specific objects. We conduct extensive visualization analysis, and the results reveal the close relationship between neurons and object parts in Feedback CNN. Finally, we evaluate the proposed Feedback CNN on the tasks of weakly supervised object localization and segmentation, and the experimental results on ImageNet and Pascal VOC show that our method remarkably outperforms the state-of-the-art ones.
|
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a Research Support, Non-U.S. Gov't
|
700 |
1 |
|
|a Huang, Yongzhen
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Yang, Yi
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wang, Liang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wang, Zilei
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Tan, Tieniu
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 41(2019), 7 vom: 03. Juli, Seite 1627-1640
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:41
|g year:2019
|g number:7
|g day:03
|g month:07
|g pages:1627-1640
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2018.2843329
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 41
|j 2019
|e 7
|b 03
|c 07
|h 1627-1640
|