PSLT : A Light-Weight Vision Transformer With Ladder Self-Attention and Progressive Shift
Vision Transformer (ViT) has shown great potential for various visual tasks due to its ability to model long-range dependencies. However, ViT requires a large amount of computing resources to compute the global self-attention. In this work, we propose a ladder self-attention block with multiple branche...
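The abstract only sketches the design: a ladder self-attention block that splits the computation across multiple branches to avoid full global self-attention. As a minimal, hedged illustration (not the paper's exact method), the sketch below assumes channels are split across branches, each branch runs attention on its slice, and each branch's output is handed to the next one; the class name `LadderSelfAttentionSketch`, `num_branches`, and the simple residual hand-off are all illustrative assumptions, and PSLT's window shifting and other details are omitted.

```python
import torch
import torch.nn as nn


class LadderSelfAttentionSketch(nn.Module):
    """Hypothetical sketch of a multi-branch ("ladder") self-attention block.

    Channels are split across branches; each branch runs single-head
    self-attention on its slice and passes its output to the next branch.
    This is an assumption-based illustration, not the PSLT implementation.
    """

    def __init__(self, dim: int, num_branches: int = 4):
        super().__init__()
        assert dim % num_branches == 0, "dim must be divisible by num_branches"
        self.num_branches = num_branches
        self.branch_dim = dim // num_branches
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(self.branch_dim, num_heads=1, batch_first=True)
             for _ in range(num_branches)]
        )
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); split channels evenly across branches
        chunks = x.chunk(self.num_branches, dim=-1)
        outputs, prev = [], 0
        for chunk, attn in zip(chunks, self.attn):
            branch_in = chunk + prev          # "ladder" connection from the previous branch
            out, _ = attn(branch_in, branch_in, branch_in)
            outputs.append(out)
            prev = out                        # feed this branch's result to the next branch
        return self.proj(torch.cat(outputs, dim=-1))


if __name__ == "__main__":
    x = torch.randn(2, 196, 64)               # 14x14 tokens, 64 channels
    block = LadderSelfAttentionSketch(dim=64, num_branches=4)
    print(block(x).shape)                      # torch.Size([2, 196, 64])
```

Because each branch attends only over a `dim / num_branches` channel slice, the per-branch attention is cheaper than a single full-width attention, which is the rough intuition behind the multi-branch design described in the abstract.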
Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 9, 5 Sept. 2023, pp. 11120-11135 |
---|---|
Format: | Online article |
Language: | English |
Published: | 2023 |
Collection: | IEEE Transactions on Pattern Analysis and Machine Intelligence |
Subjects: | Journal Article |
Online access: | Full text |