ResMLP : Feedforward Networks for Image Classification With Data-Efficient Training

We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We also train ResMLP models in a self-supervised setup, to further remove priors from employing a labelled dataset. Finally, by adapting our model to machine translation we achieve surprisingly good results. We share pre-trained models and our code based on the Timm library.
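To illustrate the block structure described in the abstract, below is a minimal sketch of one ResMLP block in PyTorch. This is not the authors' released implementation (which is built on the Timm library); the class names, the hidden width in the usage example, and the 4x channel expansion are assumptions made for this sketch, and details such as layer-scale initialization are omitted.

# Minimal sketch of one ResMLP block; names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class Affine(nn.Module):
    """Simple per-channel affine transform used in place of LayerNorm."""
    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        return self.alpha * x + self.beta

class ResMLPBlock(nn.Module):
    """Residual block alternating (i) a linear layer mixing patches, applied
    identically across channels, and (ii) a two-layer feed-forward network
    mixing channels, applied independently per patch."""
    def __init__(self, num_patches, dim, expansion=4):
        super().__init__()
        self.norm1 = Affine(dim)
        # (i) cross-patch linear mixing, shared across all channels
        self.patch_mix = nn.Linear(num_patches, num_patches)
        self.norm2 = Affine(dim)
        # (ii) per-patch two-layer MLP over channels
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, expansion * dim),
            nn.GELU(),
            nn.Linear(expansion * dim, dim),
        )

    def forward(self, x):  # x: (batch, num_patches, dim)
        # transpose so the linear layer acts along the patch axis, then transpose back
        x = x + self.patch_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        # channel mixing within each patch token
        x = x + self.channel_mlp(self.norm2(x))
        return x

# Usage example (assumed sizes): a 224x224 image cut into 16x16 patches gives 196 tokens.
block = ResMLPBlock(num_patches=196, dim=384)
tokens = torch.randn(2, 196, 384)
print(block(tokens).shape)  # torch.Size([2, 196, 384])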

Detailed description

Bibliographic details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 45(2023), 4, dated 12 Apr., pages 5314-5321
Main author: Touvron, Hugo (Author)
Other authors: Bojanowski, Piotr, Caron, Mathilde, Cord, Matthieu, El-Nouby, Alaaeldin, Grave, Edouard, Izacard, Gautier, Joulin, Armand, Synnaeve, Gabriel, Verbeek, Jakob, Jegou, Herve
Format: Online article
Language: English
Published: 2023
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Keywords: Journal Article
LEADER 01000naa a22002652 4500
001 NLM346119324
003 DE-627
005 20231226030648.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2022.3206148  |2 doi 
028 5 2 |a pubmed24n1153.xml 
035 |a (DE-627)NLM346119324 
035 |a (NLM)36094972 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Touvron, Hugo  |e verfasserin  |4 aut 
245 1 0 |a ResMLP  |b Feedforward Networks for Image Classification With Data-Efficient Training 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 10.04.2023 
500 |a Date Revised 10.04.2023 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We also train ResMLP models in a self-supervised setup, to further remove priors from employing a labelled dataset. Finally, by adapting our model to machine translation we achieve surprisingly good results. We share pre-trained models and our code based on the Timm library 
650 4 |a Journal Article 
700 1 |a Bojanowski, Piotr  |e verfasserin  |4 aut 
700 1 |a Caron, Mathilde  |e verfasserin  |4 aut 
700 1 |a Cord, Matthieu  |e verfasserin  |4 aut 
700 1 |a El-Nouby, Alaaeldin  |e verfasserin  |4 aut 
700 1 |a Grave, Edouard  |e verfasserin  |4 aut 
700 1 |a Izacard, Gautier  |e verfasserin  |4 aut 
700 1 |a Joulin, Armand  |e verfasserin  |4 aut 
700 1 |a Synnaeve, Gabriel  |e verfasserin  |4 aut 
700 1 |a Verbeek, Jakob  |e verfasserin  |4 aut 
700 1 |a Jegou, Herve  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 45(2023), 4 vom: 12. Apr., Seite 5314-5321  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:45  |g year:2023  |g number:4  |g day:12  |g month:04  |g pages:5314-5321 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2022.3206148  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 45  |j 2023  |e 4  |b 12  |c 04  |h 5314-5321