What Makes for Good Tokenizers in Vision Transformer?

The architecture of transformers, which have recently witnessed booming applications in vision tasks, has pivoted away from the widespread convolutional paradigm. Relying on the tokenization process that splits inputs into multiple tokens, transformers are capable of extracting their pairwise relationships using self-attention. Although the tokenizer is a fundamental building block of transformers, what makes for a good tokenizer has not been well understood in computer vision. In this work, we investigate this uncharted problem from an information trade-off perspective. In addition to unifying and understanding existing structural modifications, our derivation leads to better design strategies for vision tokenizers. The proposed Modulation across Tokens (MoTo) incorporates inter-token modeling capability through normalization. Furthermore, a regularization objective, TokenProp, is embraced in the standard training regime. Through extensive experiments on various transformer architectures, we observe both improved performance and intriguing properties of these two plug-and-play designs with negligible computational overhead. These observations further indicate the importance of the commonly omitted design of tokenizers in vision transformers.
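The abstract refers to the tokenization step that splits an image into tokens before self-attention is applied. As a minimal illustrative sketch of such a tokenizer (a generic ViT-style patch embedding, not the MoTo or TokenProp designs proposed in the paper; class name, patch size, and dimensions are assumptions for illustration):

    # Minimal ViT-style patch tokenizer (illustrative sketch only; this is the
    # generic baseline design, not the paper's MoTo/TokenProp contributions).
    import torch
    import torch.nn as nn

    class PatchTokenizer(nn.Module):
        """Splits an image into non-overlapping patches and embeds each as a token."""
        def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
            super().__init__()
            self.num_tokens = (img_size // patch_size) ** 2
            # A strided convolution implements patch splitting plus linear projection in one step.
            self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

        def forward(self, x):                    # x: (B, C, H, W)
            x = self.proj(x)                     # (B, embed_dim, H/ps, W/ps)
            return x.flatten(2).transpose(1, 2)  # (B, num_tokens, embed_dim)

    tokens = PatchTokenizer()(torch.randn(1, 3, 224, 224))
    print(tokens.shape)  # torch.Size([1, 196, 768])

The resulting sequence of 196 tokens is what self-attention then operates on to extract pairwise relationships.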


Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 45(2023), 11, 06 Nov., pages 13011-13023
Main Author: Qian, Shengju (Author)
Other Authors: Zhu, Yi; Li, Wenbo; Li, Mu; Jia, Jiaya
Format: Online article
Language: English
Published: 2023
Access to parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652c 4500
001 NLM355202808
003 DE-627
005 20250509103751.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2022.3231442  |2 doi 
028 5 2 |a pubmed25n1365.xml 
035 |a (DE-627)NLM355202808 
035 |a (NLM)37015534 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Qian, Shengju  |e verfasserin  |4 aut 
245 1 0 |a What Makes for Good Tokenizers in Vision Transformer? 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 04.04.2025 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a The architecture of transformers, which have recently witnessed booming applications in vision tasks, has pivoted away from the widespread convolutional paradigm. Relying on the tokenization process that splits inputs into multiple tokens, transformers are capable of extracting their pairwise relationships using self-attention. Although the tokenizer is a fundamental building block of transformers, what makes for a good tokenizer has not been well understood in computer vision. In this work, we investigate this uncharted problem from an information trade-off perspective. In addition to unifying and understanding existing structural modifications, our derivation leads to better design strategies for vision tokenizers. The proposed Modulation across Tokens (MoTo) incorporates inter-token modeling capability through normalization. Furthermore, a regularization objective, TokenProp, is embraced in the standard training regime. Through extensive experiments on various transformer architectures, we observe both improved performance and intriguing properties of these two plug-and-play designs with negligible computational overhead. These observations further indicate the importance of the commonly omitted design of tokenizers in vision transformers.
650 4 |a Journal Article 
700 1 |a Zhu, Yi  |e verfasserin  |4 aut 
700 1 |a Li, Wenbo  |e verfasserin  |4 aut 
700 1 |a Li, Mu  |e verfasserin  |4 aut 
700 1 |a Jia, Jiaya  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 45(2023), 11 vom: 06. Nov., Seite 13011-13023  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnas 
773 1 8 |g volume:45  |g year:2023  |g number:11  |g day:06  |g month:11  |g pages:13011-13023 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2022.3231442  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 45  |j 2023  |e 11  |b 06  |c 11  |h 13011-13023