Adapting Few-Shot Classification via In-Process Defense

Most few-shot learning methods employ either adaptive approaches or parameter amortization techniques. However, their reliance on pre-trained models presents a significant vulnerability: when an attacker's trigger activates a hidden backdoor, it can cause images to be misclassified, profoundly degrading the model's performance. In our research, we explore adaptive defenses against backdoor attacks for few-shot learning. We introduce a specialized stochastic process, tailored to task characteristics, that safeguards the classification model against attack-induced incorrect feature extraction. Because this process operates during forward propagation, we term it an "in-process defense." Our method employs an adaptive strategy that generates task-level representations, enabling rapid adaptation to pre-trained models and proving effective at countering backdoor attacks in few-shot classification scenarios. We apply latent stochastic processes to approximate task distributions and derive task-level representations from the support set. These task-level representations guide feature extraction so that backdoor triggers no longer match, forming the foundation of our parameter defense strategy. Benchmark tests on Meta-Dataset show that our approach not only withstands backdoor attacks but also adapts better when addressing few-shot classification tasks.
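As a rough illustration of the mechanism the abstract describes (not the authors' published code), the sketch below samples a task-level representation z from a latent Gaussian distribution inferred on the support set and uses it to modulate a frozen pre-trained backbone's features during forward propagation. The diagonal-Gaussian posterior, the FiLM-style modulation, and the prototype classifier are illustrative assumptions; the paper's exact latent stochastic process is not specified here.

import torch
import torch.nn as nn

class TaskEncoder(nn.Module):
    # Amortized diagonal-Gaussian posterior over a latent task variable z,
    # inferred by mean-pooling support-set features (Neural-Process style;
    # an assumption, not the paper's exact construction).
    def __init__(self, feat_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.mu = nn.Linear(feat_dim, z_dim)
        self.log_var = nn.Linear(feat_dim, z_dim)

    def forward(self, support_feats):
        h = self.net(support_feats).mean(dim=0)   # pool over support samples
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterized sample of the task-level representation z.
        return mu + torch.randn_like(mu) * (0.5 * log_var).exp()

class FiLMDefense(nn.Module):
    # Task-conditioned rescaling and shifting of backbone features during
    # the forward pass, so a fixed backdoor trigger no longer maps to the
    # features the attacker planted ("trigger mismatching").
    def __init__(self, feat_dim, z_dim):
        super().__init__()
        self.gamma = nn.Linear(z_dim, feat_dim)
        self.beta = nn.Linear(z_dim, feat_dim)

    def forward(self, feats, z):
        return self.gamma(z) * feats + self.beta(z)

def classify(backbone, task_enc, film, support_x, support_y, query_x, n_way):
    # Prototype-based few-shot classification with in-process modulation;
    # the pre-trained backbone stays frozen.
    with torch.no_grad():
        s_feats, q_feats = backbone(support_x), backbone(query_x)
    z = task_enc(s_feats)                         # task-level representation
    s_feats, q_feats = film(s_feats, z), film(q_feats, z)
    protos = torch.stack([s_feats[support_y == c].mean(0) for c in range(n_way)])
    return -torch.cdist(q_feats, protos)          # logits: negative distances

Intuitively, because z is resampled per task, the modulation applied to the feature space differs from the one the backdoor was trained against, which is one plausible reading of how the in-process defense induces trigger mismatching.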

Detailed description

Bibliographic details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 33(2024), day 17, pages 5232-5245
Main author: Yang, Xi (author)
Other authors: Kong, Dechen, Lin, Ren, Wang, Nannan, Gao, Xinbo
Format: Online article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000caa a22002652c 4500
001 NLM377750735
003 DE-627
005 20250306161653.0
007 cr uuu---uuuuu
008 240918s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2024.3458858  |2 doi 
028 5 2 |a pubmed25n1258.xml 
035 |a (DE-627)NLM377750735 
035 |a (NLM)39288044 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Yang, Xi  |e verfasserin  |4 aut 
245 1 0 |a Adapting Few-Shot Classification via In-Process Defense 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 26.09.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Most few-shot learning methods employ either adaptive approaches or parameter amortization techniques. However, their reliance on pre-trained models presents a significant vulnerability: when an attacker's trigger activates a hidden backdoor, it can cause images to be misclassified, profoundly degrading the model's performance. In our research, we explore adaptive defenses against backdoor attacks for few-shot learning. We introduce a specialized stochastic process, tailored to task characteristics, that safeguards the classification model against attack-induced incorrect feature extraction. Because this process operates during forward propagation, we term it an "in-process defense." Our method employs an adaptive strategy that generates task-level representations, enabling rapid adaptation to pre-trained models and proving effective at countering backdoor attacks in few-shot classification scenarios. We apply latent stochastic processes to approximate task distributions and derive task-level representations from the support set. These task-level representations guide feature extraction so that backdoor triggers no longer match, forming the foundation of our parameter defense strategy. Benchmark tests on Meta-Dataset show that our approach not only withstands backdoor attacks but also adapts better when addressing few-shot classification tasks.
650 4 |a Journal Article 
700 1 |a Kong, Dechen  |e verfasserin  |4 aut 
700 1 |a Lin, Ren  |e verfasserin  |4 aut 
700 1 |a Wang, Nannan  |e verfasserin  |4 aut 
700 1 |a Gao, Xinbo  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 33(2024) vom: 17., Seite 5232-5245  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnas 
773 1 8 |g volume:33  |g year:2024  |g day:17  |g pages:5232-5245 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2024.3458858  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 33  |j 2024  |b 17  |h 5232-5245