Adapting Few-Shot Classification via In-Process Defense

Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - 33 (2024), dated: 26, pages 5232-5245
First author: Yang, Xi (author)
Other authors: Kong, Dechen, Lin, Ren, Wang, Nannan, Gao, Xinbo
Format: Online article
Language: English
Published: 2024
Access to the parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Abstract: Most few-shot learning methods employ either adaptive approaches or parameter amortization techniques. However, their reliance on pre-trained models presents a significant vulnerability: when an attacker's trigger activates a hidden backdoor, it can cause images to be misclassified, profoundly degrading the model's performance. In our research, we explore adaptive defenses against backdoor attacks for few-shot learning. We introduce a specialized stochastic process, tailored to task characteristics, that safeguards the classification model against attack-induced incorrect feature extraction. Because this process operates during forward propagation, we term it an "in-process defense." Our method employs an adaptive strategy that generates task-level representations, enabling rapid adaptation of pre-trained models and countering backdoor attacks in few-shot classification scenarios. We apply latent stochastic processes to approximate task distributions and derive task-level representations from the support set. This task-level representation guides feature extraction, leading to backdoor trigger mismatching and forming the foundation of our parameter defense strategy. Benchmark tests on Meta-Dataset reveal that our approach not only withstands backdoor attacks but also shows improved adaptation on few-shot classification tasks.
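To make the mechanism sketched in the abstract concrete, the following minimal PyTorch sketch shows one plausible reading: a neural-process-style encoder amortizes a stochastic task-level representation z from support-set features, and z modulates the features of a frozen (possibly backdoored) backbone during the forward pass. The class names, the FiLM-style modulation, and all dimensions are illustrative assumptions, not the authors' implementation.

# A rough sketch of the idea in the abstract, not the paper's code:
# a latent task encoder aggregates support-set features into a stochastic
# task representation, and a sample from it modulates a frozen pre-trained
# backbone's features during forward propagation ("in-process defense").
import torch
import torch.nn as nn


class TaskEncoder(nn.Module):
    """Amortizes a task distribution q(z | support set) from support features."""

    def __init__(self, feat_dim: int, z_dim: int):
        super().__init__()
        self.mu = nn.Linear(feat_dim, z_dim)
        self.log_var = nn.Linear(feat_dim, z_dim)

    def forward(self, support_feats: torch.Tensor) -> torch.Tensor:
        # Mean-pool over the support set, then sample z ~ N(mu, sigma^2)
        # via the reparameterization trick (the stochastic-process step).
        pooled = support_feats.mean(dim=0)
        mu, log_var = self.mu(pooled), self.log_var(pooled)
        return mu + torch.randn_like(mu) * (0.5 * log_var).exp()


class InProcessDefense(nn.Module):
    """Modulates frozen backbone features with the task sample z (FiLM-style),
    so features a backdoored backbone aligns with a trigger are re-mapped."""

    def __init__(self, backbone: nn.Module, feat_dim: int, z_dim: int):
        super().__init__()
        self.backbone = backbone.eval()          # pre-trained, possibly backdoored
        for p in self.backbone.parameters():     # keep the backbone frozen
            p.requires_grad_(False)
        self.encoder = TaskEncoder(feat_dim, z_dim)
        self.gamma = nn.Linear(z_dim, feat_dim)  # per-task feature scale
        self.beta = nn.Linear(z_dim, feat_dim)   # per-task feature shift

    def forward(self, support_x: torch.Tensor, query_x: torch.Tensor):
        with torch.no_grad():
            s_feat = self.backbone(support_x)
            q_feat = self.backbone(query_x)
        z = self.encoder(s_feat)                 # task-level representation
        # Feature modulation during the forward pass ("in-process"):
        return q_feat * (1 + self.gamma(z)) + self.beta(z)


# Toy usage with a stand-in backbone (5-way, 1-shot style shapes).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
model = InProcessDefense(backbone, feat_dim=64, z_dim=16)
support = torch.randn(5, 3, 32, 32)
query = torch.randn(10, 3, 32, 32)
print(model(support, query).shape)  # torch.Size([10, 64])

Because the modulation parameters are inferred per task from the clean support set, a trigger pattern baked into the frozen backbone need no longer map to the feature direction the attacker targeted, which is one way to realize the "trigger mismatching" the abstract describes.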
Description: Date Revised 26.09.2024
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2024.3458858