MoE-Adapters++ : Towards More Efficient Continual Learning of Vision-Language Models via Dynamic Mixture-of-Experts Adapters

In this paper, we first propose MoE-Adapters, a parameter-efficient training framework to alleviate long-term forgetting in incremental learning with Vision-Language Models (VLMs). Our MoE-Adapters leverage incrementally added routers to activate and integrate exclusive expert adapters from a...
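The abstract describes routers that select and mix expert adapters on top of a frozen VLM backbone. The following is a minimal sketch of that general "router + top-k expert adapters" idea, not the authors' implementation; the names (MoEAdapter, num_experts, top_k, bottleneck) and the dense computation of all experts are illustrative assumptions.

```python
# Minimal sketch of a mixture-of-experts adapter layer (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return self.up(F.relu(self.down(x)))

class MoEAdapter(nn.Module):
    """A router scores the experts, keeps the top-k, and mixes their outputs."""
    def __init__(self, dim, num_experts=4, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(Adapter(dim) for _ in range(num_experts))
        self.router = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x):                        # x: (batch, dim)
        logits = self.router(x)                  # (batch, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)        # normalize over the selected experts
        # For simplicity all experts are evaluated; a sparse version would run only the selected ones.
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)          # (batch, E, dim)
        picked = torch.gather(expert_out, 1,
                              idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))    # (batch, k, dim)
        mixed = (weights.unsqueeze(-1) * picked).sum(dim=1)                    # (batch, dim)
        return x + mixed                         # residual connection around the adapters

# Example: route a batch of CLIP-like features through the MoE adapter layer.
feats = torch.randn(8, 512)
layer = MoEAdapter(dim=512)
print(layer(feats).shape)  # torch.Size([8, 512])
```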


Bibliographic details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence (1979-), vol. PP (2025), 11 Aug.
Main author: Yu, Jiazuo (author)
Other authors: Huang, Zichen; Zhuge, Yunzhi; Zhang, Lu; Hu, Ping; Wang, Dong; Lu, Huchuan; He, You
Format: Online article
Language: English
Published: 2025
Collection: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subjects: Journal Article