A Survey on Vision Transformer

Transformer, first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply transformer to computer vision tasks. In a variety of visual benchmarks, transformer-based models perform similarly to or better than other types of networks such as convolutional and recurrent neural networks. Given its high performance and less need for vision-specific inductive bias, transformer is receiving more and more attention from the computer vision community. In this paper, we review these vision transformer models by categorizing them in different tasks and analyzing their advantages and disadvantages. The main categories we explore include the backbone network, high/mid-level vision, low-level vision, and video processing. We also include efficient transformer methods for pushing transformer into real device-based applications. Furthermore, we take a brief look at the self-attention mechanism in computer vision, as it is the base component in transformer. Toward the end of this paper, we discuss the challenges and provide several further research directions for vision transformers.
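As the abstract notes, self-attention is the base component of the transformer. The sketch below illustrates scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, over a toy batch of patch tokens. It is a minimal NumPy illustration; the shapes, weight matrices, and function name are assumptions chosen for the example, not code from the surveyed models.

import numpy as np

def scaled_dot_product_attention(q, k, v):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017).
    d_k = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)   # (batch, tokens, tokens)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v                                 # (batch, tokens, d_k)

# Toy example: 1 image, 4 patch tokens, 8-dim embeddings (all sizes illustrative).
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4, 8))                         # patch token embeddings
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ w_q, x @ w_k, x @ w_v)
print(out.shape)                                       # (1, 4, 8)

In a vision transformer, each image patch embedding plays the role of a token in this computation, so every patch attends to every other patch.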

Detailed description

Bibliographic details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 45(2023), 1, dated 01 Jan., pages 87-110
First author: Han, Kai (author)
Other authors: Wang, Yunhe, Chen, Hanting, Chen, Xinghao, Guo, Jianyuan, Liu, Zhenhua, Tang, Yehui, Xiao, An, Xu, Chunjing, Xu, Yixing, Yang, Zhaohui, Zhang, Yiman, Tao, Dacheng
Format: Online article
Language: English
Published: 2023
Access to parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM337119449
003 DE-627
005 20231225233703.0
007 cr uuu---uuuuu
008 231225s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2022.3152247  |2 doi 
028 5 2 |a pubmed24n1123.xml 
035 |a (DE-627)NLM337119449 
035 |a (NLM)35180075 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Han, Kai  |e verfasserin  |4 aut 
245 1 2 |a A Survey on Vision Transformer 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 05.04.2023 
500 |a Date Revised 05.04.2023 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Transformer, first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply transformer to computer vision tasks. In a variety of visual benchmarks, transformer-based models perform similarly to or better than other types of networks such as convolutional and recurrent neural networks. Given its high performance and less need for vision-specific inductive bias, transformer is receiving more and more attention from the computer vision community. In this paper, we review these vision transformer models by categorizing them in different tasks and analyzing their advantages and disadvantages. The main categories we explore include the backbone network, high/mid-level vision, low-level vision, and video processing. We also include efficient transformer methods for pushing transformer into real device-based applications. Furthermore, we take a brief look at the self-attention mechanism in computer vision, as it is the base component in transformer. Toward the end of this paper, we discuss the challenges and provide several further research directions for vision transformers.
650 4 |a Journal Article 
700 1 |a Wang, Yunhe  |e verfasserin  |4 aut 
700 1 |a Chen, Hanting  |e verfasserin  |4 aut 
700 1 |a Chen, Xinghao  |e verfasserin  |4 aut 
700 1 |a Guo, Jianyuan  |e verfasserin  |4 aut 
700 1 |a Liu, Zhenhua  |e verfasserin  |4 aut 
700 1 |a Tang, Yehui  |e verfasserin  |4 aut 
700 1 |a Xiao, An  |e verfasserin  |4 aut 
700 1 |a Xu, Chunjing  |e verfasserin  |4 aut 
700 1 |a Xu, Yixing  |e verfasserin  |4 aut 
700 1 |a Yang, Zhaohui  |e verfasserin  |4 aut 
700 1 |a Zhang, Yiman  |e verfasserin  |4 aut 
700 1 |a Tao, Dacheng  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 45(2023), 1 vom: 01. Jan., Seite 87-110  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:45  |g year:2023  |g number:1  |g day:01  |g month:01  |g pages:87-110 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2022.3152247  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 45  |j 2023  |e 1  |b 01  |c 01  |h 87-110