Squeeze-and-Excitation Networks

The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251 percent, surpassing the winning entry of 2016 by a relative improvement of ∼25 percent. Models and code are available at https://github.com/hujie-frank/SENet
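
For a concrete picture of the block described in the abstract, below is a minimal PyTorch sketch: the squeeze step is global average pooling, the excitation step is a two-layer fully connected bottleneck with sigmoid gating, and the reduction ratio of 16 follows the paper's default. This is illustrative only, not the authors' reference implementation (which is at the GitHub URL above); the class name SEBlock and the example tensor shapes are assumptions made for this sketch.

import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Illustrative Squeeze-and-Excitation block: recalibrates channel-wise responses."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling collapses each HxW channel map to one scalar.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a two-layer bottleneck models channel interdependencies
        # and emits a gate in (0, 1) for every channel.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.pool(x).view(b, c)       # squeeze:    (B, C, H, W) -> (B, C)
        w = self.fc(s).view(b, c, 1, 1)   # excitation: per-channel weights
        return x * w                      # recalibration: rescale each channel


if __name__ == "__main__":
    features = torch.randn(2, 64, 56, 56)        # e.g. an intermediate CNN feature map
    print(SEBlock(channels=64)(features).shape)  # torch.Size([2, 64, 56, 56])

In a residual network the block is typically inserted after the final convolution of each residual branch, so the recalibrated features are added to the identity path; the paper reports that this adds only slight computational cost.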

Bibliographic details

Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 42(2020), issue 8, 27 Aug., pages 2011-2023
First author: Hu, Jie (author)
Other authors: Shen, Li; Albanie, Samuel; Sun, Gang; Wu, Enhua
Format: Online article
Language: English
Published: 2020
Parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article; Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM296542547
003 DE-627
005 20231225090052.0
007 cr uuu---uuuuu
008 231225s2020 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2019.2913372  |2 doi 
028 5 2 |a pubmed24n0988.xml 
035 |a (DE-627)NLM296542547 
035 |a (NLM)31034408 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Hu, Jie  |e verfasserin  |4 aut 
245 1 0 |a Squeeze-and-Excitation Networks 
264 1 |c 2020 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 12.02.2021 
500 |a Date Revised 12.02.2021 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251 percent, surpassing the winning entry of 2016 by a relative improvement of ∼25 percent. Models and code are available at https://github.com/hujie-frank/SENet 
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Shen, Li  |e verfasserin  |4 aut 
700 1 |a Albanie, Samuel  |e verfasserin  |4 aut 
700 1 |a Sun, Gang  |e verfasserin  |4 aut 
700 1 |a Wu, Enhua  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 42(2020), 8 vom: 27. Aug., Seite 2011-2023  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:42  |g year:2020  |g number:8  |g day:27  |g month:08  |g pages:2011-2023 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2019.2913372  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 42  |j 2020  |e 8  |b 27  |c 08  |h 2011-2023