Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression
Deep learning usually achieves excellent performance at the expense of heavy computation. Recently, model compression has become a popular way of reducing this computation. Compression can be achieved using knowledge distillation or filter pruning. Knowledge distillation improves the accuracy of a light...
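As background for the two compression routes the abstract names (the sketch below is not the paper's joint framework), here is a minimal illustration of the classic soft-target distillation loss, assuming a PyTorch setting; the function name, temperature `T`, and mixing weight `alpha` are illustrative choices.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style knowledge-distillation loss: a weighted sum of a
    soft-target KL term (teacher -> student) and the usual
    cross-entropy on ground-truth labels. Hyperparameters are
    illustrative, not taken from the paper."""
    # Soften both distributions with temperature T
    soft_teacher = F.log_softmax(teacher_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    # KL divergence between softened distributions; T^2 keeps
    # gradient magnitudes comparable across temperatures
    kd = F.kl_div(soft_student, soft_teacher, log_target=True,
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

Filter pruning, the other route mentioned, instead removes whole convolutional filters judged unimportant, shrinking the network itself rather than transferring knowledge to a smaller one.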
Detailed Description
Bibliographic Details
| Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - 45 (2023), 3, 22 March, pages 3378-3395 |
| Main Author: | Liu, Yufan (Author) |
| Other Authors: | Cao, Jiajiong; Li, Bing; Hu, Weiming; Maybank, Stephen |
| Format: | Online article |
| Language: | English |
| Published: | 2023 |
| Parent Work: | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Subjects: | Journal Article |