How Does Attention Work in Vision Transformers? A Visual Analytics Attempt

The vision transformer (ViT) extends the success of transformer models from sequential data to images. The model decomposes an image into many small patches and arranges them as a sequence. Multi-head self-attention is then applied to this sequence to learn the attention between patches. Despite ma...
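The patch-sequence construction and multi-head self-attention that the abstract describes can be sketched as follows. This is a minimal NumPy illustration under generic ViT assumptions, not the paper's implementation; all function names, dimensions, and weights here are made up for illustration.

```python
import numpy as np

def image_to_patches(img, patch):
    # Split an (H, W, C) image into non-overlapping patch x patch tiles
    # and flatten each tile into a vector, forming a token sequence.
    H, W, C = img.shape
    seq = []
    for r in range(H // patch):
        for c in range(W // patch):
            tile = img[r*patch:(r+1)*patch, c*patch:(c+1)*patch, :]
            seq.append(tile.reshape(-1))
    return np.stack(seq)                      # (num_patches, patch*patch*C)

def multi_head_self_attention(x, num_heads, rng):
    # One MHSA layer: project tokens to Q, K, V, run scaled dot-product
    # attention independently per head, concatenate, and project back.
    n, d = x.shape
    d_head = d // num_heads
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    heads = []
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_head)
        attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)   # softmax over patches
        heads.append(attn @ V[:, s])               # each row: attention-weighted mix
    return np.concatenate(heads, axis=-1) @ Wo

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8, 3))          # toy 8x8 RGB "image"
seq = image_to_patches(img, patch=4)          # 4 patches, each 4*4*3 = 48 dims
out = multi_head_self_attention(seq, num_heads=4, rng=rng)
print(seq.shape, out.shape)                   # (4, 48) (4, 48)
```

Each row of the per-head `attn` matrix is the attention distribution of one patch over all patches, which is exactly the quantity a visual-analytics tool for ViT would inspect.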


Bibliographic details

Published in: IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 6 (5 June 2023), pp. 2888-2900
Main author: Li, Yiran
Other authors: Wang, Junpeng, Dai, Xin, Wang, Liang, Yeh, Chin-Chia Michael, Zheng, Yan, Zhang, Wei, Ma, Kwan-Liu
Format: Online article
Language: English
Published: 2023
Collection: IEEE Transactions on Visualization and Computer Graphics
Subjects: Journal Article