Pruning Self-Attentions Into Convolutional Layers in Single Path

Vision Transformers (ViTs) have achieved impressive performance over various computer vision tasks. However, modeling global correlations with multi-head self-attention (MSA) layers leads to two widely recognized issues: the massive computational resource consumption and the lack of intrinsic inductive bias for modeling local visual patterns. To solve both issues, we devise a simple yet effective method named Single-Path Vision Transformer pruning (SPViT), to efficiently and automatically compress the pre-trained ViTs into compact models with proper locality added. Specifically, we first propose a novel weight-sharing scheme between MSA and convolutional operations, delivering a single-path space to encode all candidate operations. In this way, we cast the operation search problem as finding which subset of parameters to use in each MSA layer, which significantly reduces the computational cost and optimization difficulty, and the convolution kernels can be well initialized using pre-trained MSA parameters. Relying on the single-path space, we introduce learnable binary gates to encode the operation choices in MSA layers. Similarly, we further employ learnable gates to encode the fine-grained MLP expansion ratios of FFN layers. In this way, our SPViT optimizes the learnable gates to automatically explore from a vast and unified search space and flexibly adjust the MSA-FFN pruning proportions for each individual dense model. We conduct extensive experiments on two representative ViTs showing that our SPViT achieves a new SOTA for pruning on ImageNet-1k. For example, our SPViT can trim 52.0% FLOPs for DeiT-B and get an impressive 0.6% top-1 accuracy gain simultaneously.
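The learnable binary gates described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the class name `SinglePathGate`, the scalar-logit parameterization, and the hard/soft switch are all assumptions made here for illustration. It shows the core idea: a single learnable logit per layer decides whether the full MSA output or the cheaper convolution-style path (computed from a shared subset of the same parameters) is used, with a straight-through-style discrete choice in the forward pass.

```python
import math

def sigmoid(x):
    """Logistic function mapping a real-valued logit to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

class SinglePathGate:
    """Hypothetical sketch of one learnable binary gate (not the paper's code).

    `logit` is the learnable scalar that encodes the operation choice for a
    layer; during search it would be updated by gradient descent alongside
    the shared MSA/convolution weights.
    """

    def __init__(self, logit=0.0):
        self.logit = logit  # learnable scalar parameter

    def forward(self, msa_out, conv_out, hard=True):
        p = sigmoid(self.logit)  # probability of keeping the MSA path
        if hard:
            # Straight-through style: a discrete 0/1 choice in the forward
            # pass (the backward pass would use the soft probability).
            g = 1.0 if p > 0.5 else 0.0
        else:
            g = p  # soft relaxation: blend the two candidate operations
        return g * msa_out + (1.0 - g) * conv_out
```

A positive logit selects the MSA path and a negative one selects the convolutional path; because both candidates share parameters, discarding the MSA path prunes the layer rather than duplicating weights.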


Bibliographic details

Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46(2024), 5, 05 May, pages 3910-3922
Main author: He, Haoyu (Author)
Other authors: Cai, Jianfei, Liu, Jing, Pan, Zizheng, Zhang, Jing, Tao, Dacheng, Zhuang, Bohan
Format: Online article
Language: English
Published: 2024
Collection: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652c 4500
001 NLM367316080
003 DE-627
005 20250305165923.0
007 cr uuu---uuuuu
008 240120s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3355890  |2 doi 
028 5 2 |a pubmed25n1223.xml 
035 |a (DE-627)NLM367316080 
035 |a (NLM)38241113 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a He, Haoyu  |e verfasserin  |4 aut 
245 1 0 |a Pruning Self-Attentions Into Convolutional Layers in Single Path 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 03.04.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Vision Transformers (ViTs) have achieved impressive performance over various computer vision tasks. However, modeling global correlations with multi-head self-attention (MSA) layers leads to two widely recognized issues: the massive computational resource consumption and the lack of intrinsic inductive bias for modeling local visual patterns. To solve both issues, we devise a simple yet effective method named Single-Path Vision Transformer pruning (SPViT), to efficiently and automatically compress the pre-trained ViTs into compact models with proper locality added. Specifically, we first propose a novel weight-sharing scheme between MSA and convolutional operations, delivering a single-path space to encode all candidate operations. In this way, we cast the operation search problem as finding which subset of parameters to use in each MSA layer, which significantly reduces the computational cost and optimization difficulty, and the convolution kernels can be well initialized using pre-trained MSA parameters. Relying on the single-path space, we introduce learnable binary gates to encode the operation choices in MSA layers. Similarly, we further employ learnable gates to encode the fine-grained MLP expansion ratios of FFN layers. In this way, our SPViT optimizes the learnable gates to automatically explore from a vast and unified search space and flexibly adjust the MSA-FFN pruning proportions for each individual dense model. We conduct extensive experiments on two representative ViTs showing that our SPViT achieves a new SOTA for pruning on ImageNet-1k. For example, our SPViT can trim 52.0% FLOPs for DeiT-B and get an impressive 0.6% top-1 accuracy gain simultaneously
650 4 |a Journal Article 
700 1 |a Cai, Jianfei  |e verfasserin  |4 aut 
700 1 |a Liu, Jing  |e verfasserin  |4 aut 
700 1 |a Pan, Zizheng  |e verfasserin  |4 aut 
700 1 |a Zhang, Jing  |e verfasserin  |4 aut 
700 1 |a Tao, Dacheng  |e verfasserin  |4 aut 
700 1 |a Zhuang, Bohan  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 5 vom: 05. Mai, Seite 3910-3922  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnas 
773 1 8 |g volume:46  |g year:2024  |g number:5  |g day:05  |g month:05  |g pages:3910-3922 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3355890  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 5  |b 05  |c 05  |h 3910-3922