Towards Lightweight Transformer Via Group-Wise Transformation for Vision-and-Language Tasks
Despite its exciting performance, the Transformer is criticized for its excessive parameters and computation cost. However, compressing the Transformer remains an open problem due to the internal complexity of its layer designs, i.e., Multi-Head Attention (MHA) and the Feed-Forward Network (FFN). To address...
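The abstract is truncated before the method is described; the title indicates a group-wise transformation of the Transformer's layers. As a purely illustrative sketch of the general idea behind grouped linear transformations (a hypothetical GroupLinear module in PyTorch, not the authors' implementation), splitting a dense dim x dim projection into independent per-group sub-projections reduces its parameter count by roughly the number of groups:

import torch
import torch.nn as nn

class GroupLinear(nn.Module):
    # Hypothetical grouped projection: the feature dimension is split into
    # `groups` chunks, each handled by its own smaller Linear layer, which cuts
    # the parameters of a dense dim x dim projection by roughly `groups`.
    def __init__(self, dim, groups):
        super().__init__()
        assert dim % groups == 0, "dim must be divisible by groups"
        self.groups = groups
        self.proj = nn.ModuleList(
            [nn.Linear(dim // groups, dim // groups) for _ in range(groups)]
        )

    def forward(self, x):
        chunks = x.chunk(self.groups, dim=-1)             # split feature dimension
        outs = [p(c) for p, c in zip(self.proj, chunks)]  # per-group projection
        return torch.cat(outs, dim=-1)                    # recombine groups

dense = nn.Linear(512, 512)
grouped = GroupLinear(512, groups=4)
print(sum(p.numel() for p in dense.parameters()))    # 262656
print(sum(p.numel() for p in grouped.parameters()))  # 66048, about 1/4 as many
x = torch.randn(2, 16, 512)                          # (batch, tokens, features)
assert grouped(x).shape == x.shape

The same grouping idea can in principle be applied to the projections inside MHA and FFN blocks; the paper's specific design is not reproduced here.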
Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society (1992-), Vol. 31 (2022), pp. 3386-3398
First author: Luo, Gen (author)
Other authors: Zhou, Yiyi; Sun, Xiaoshuai; Wang, Yan; Cao, Liujuan; Wu, Yongjian; Huang, Feiyue; Ji, Rongrong
Format: Online article
Language: English
Published: 2022
Parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article