Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression

Deep learning usually achieves excellent performance at the expense of heavy computation. Recently, model compression has become a popular way to reduce this computation. Compression can be achieved using knowledge distillation or filter pruning. Knowledge distillation improves the accuracy of a light...

Full description

Bibliographic details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - 45(2023), issue 3, 22 March, pages 3378-3395
Main author: Liu, Yufan (Author)
Other authors: Cao, Jiajiong, Li, Bing, Hu, Weiming, Maybank, Stephen
Format: Online article
Language: English
Published: 2023
Collection access: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subjects: Journal Article