LEADER |
01000naa a22002652 4500 |
001 |
NLM306791056 |
003 |
DE-627 |
005 |
20231225124139.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2021 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2020.2974833
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1022.xml
|
035 |
|
|
|a (DE-627)NLM306791056
|
035 |
|
|
|a (NLM)32086198
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Wang, Qilong
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Deep CNNs Meet Global Covariance Pooling
|b Better Representation and Generalization
|
264 |
|
1 |
|c 2021
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 16.07.2021
|
500 |
|
|
|a Date Revised 16.07.2021
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status MEDLINE
|
520 |
|
|
|a Compared with global average pooling in existing deep convolutional neural networks (CNNs), global covariance pooling can capture richer statistics of deep features and thus has the potential to improve the representation and generalization abilities of deep CNNs. However, integrating global covariance pooling into deep CNNs brings two challenges: (1) robust covariance estimation given deep features of high dimension and small sample size; (2) appropriate use of the geometry of covariances. To address these challenges, we propose a global Matrix Power Normalized COVariance (MPN-COV) Pooling. Our MPN-COV conforms to a robust covariance estimator, well suited to the scenario of high dimension and small sample size. It can also be regarded as a Power-Euclidean metric between covariances, effectively exploiting their geometry. Furthermore, a global Gaussian embedding network is proposed to incorporate first-order statistics into MPN-COV. For fast training of MPN-COV networks, we implement an iterative matrix square root normalization, avoiding the GPU-unfriendly eigen-decomposition inherent in MPN-COV. Additionally, progressive 1×1 convolutions and group convolution are introduced to compress covariance representations. The proposed methods are highly modular and can be readily plugged into existing deep CNNs. Extensive experiments are conducted on large-scale object classification, scene categorization, fine-grained visual recognition and texture classification, showing that our methods outperform their counterparts and obtain state-of-the-art performance.
|
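The abstract above describes the core MPN-COV operation: pool second-order (covariance) statistics of the convolutional features and normalize them with a matrix square root, computed iteratively to avoid GPU-unfriendly eigen-decomposition. The following is a minimal illustrative sketch in PyTorch, not the authors' released implementation; all function names are hypothetical, and a Newton-Schulz iteration stands in for the iterative matrix square root normalization mentioned in the abstract.

import torch

# Hypothetical sketch of global covariance pooling with approximate matrix
# square-root normalization; not the authors' reference code.

def covariance_pool(x):
    # x: (N, C, H, W) convolutional features -> (N, C, C) sample covariances.
    n, c, h, w = x.shape
    m = h * w
    feats = x.reshape(n, c, m)
    centered = feats - feats.mean(dim=2, keepdim=True)
    return centered @ centered.transpose(1, 2) / m

def newton_schulz_sqrt(a, num_iters=5, eps=1e-8):
    # Approximate A^(1/2) for symmetric positive (semi-)definite A without
    # eigen-decomposition; trace pre-normalization keeps the iteration stable.
    n, c, _ = a.shape
    trace = a.diagonal(dim1=1, dim2=2).sum(dim=1).view(n, 1, 1) + eps
    a_n = a / trace
    identity = torch.eye(c, device=a.device, dtype=a.dtype).expand(n, c, c)
    y, z = a_n, identity.clone()
    for _ in range(num_iters):
        t = 0.5 * (3.0 * identity - z @ y)
        y, z = y @ t, t @ z
    return y * trace.sqrt()

def mpn_cov_pool(x):
    # Covariance pooling + matrix square-root normalization; the upper
    # triangle of the (symmetric) result serves as the image representation.
    s = newton_schulz_sqrt(covariance_pool(x))
    iu = torch.triu_indices(s.shape[1], s.shape[2])
    return s[:, iu[0], iu[1]]

# Example: 256-dim features on a 14x14 grid -> 256*257/2 = 32896-dim vector.
rep = mpn_cov_pool(torch.randn(2, 256, 14, 14))
print(rep.shape)  # torch.Size([2, 32896])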
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Xie, Jiangtao
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zuo, Wangmeng
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhang, Lei
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Peihua
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 43(2021), 8 vom: 07. Aug., Seite 2582-2597
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:43
|g year:2021
|g number:8
|g day:07
|g month:08
|g pages:2582-2597
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2020.2974833
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 43
|j 2021
|e 8
|b 07
|c 08
|h 2582-2597
|