Improving Adversarial Robustness of Deep Neural Networks via Adaptive Margin Evolution

Adversarial training is the most popular and general strategy to improve Deep Neural Network (DNN) robustness against adversarial noises. Many adversarial training methods have been proposed in the past few years. However, most of these methods are highly sensitive to hyperparameters, especially the training noise upper bound. Tuning these hyperparameters is expensive and difficult for people outside the adversarial robustness research domain, which prevents adversarial training techniques from being used in many application fields. In this study, we propose a new adversarial training method, named Adaptive Margin Evolution (AME). Besides being hyperparameter-free for the user, our AME method places adversarial training samples at optimal locations in the input space by gradually expanding the exploration range with self-adaptive, gradient-aware step sizes. We evaluate AME and seven other well-known adversarial training methods on three common benchmark datasets (CIFAR10, SVHN, and Tiny ImageNet) under the most challenging adversarial attack, AutoAttack. The results show that: (1) on all three datasets, AME has the best overall performance; (2) on the much more challenging Tiny ImageNet dataset, AME has the best performance at every noise level. Our work may pave the way for adopting adversarial training techniques in application domains where hyperparameter-free methods are preferred.

Detailed Description

Bibliographic Details
Published in: Neurocomputing. - 1998. - 551 (2023), 28 Sept.
First Author: Ma, Linhai (Author)
Other Authors: Liang, Liang
Format: Online Article
Language: English
Published: 2023
Parent Work: Neurocomputing
Subjects: Journal Article; Deep neural networks; adversarial robustness; adversarial training; hyperparameter-free; optimal adversarial training sample
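The abstract describes the core idea behind AME as gradually expanding each training sample's exploration range until an optimal (margin) location in the input space is reached. As a rough, hypothetical illustration of such a margin-expanding search — on a toy linear classifier, with invented names, and in no way the authors' AME implementation — a minimal sketch might look like:

```python
# Hypothetical sketch of a margin-expanding perturbation search on a toy
# linear classifier score(x) = w.x + b with labels in {0, 1}. It enlarges an
# L-infinity perturbation of x step by step until the predicted label flips,
# i.e. until the sample sits just past the decision boundary. This only
# illustrates the general "expand until the margin is reached" idea from the
# abstract; it is not the authors' AME method.

def find_margin_sample(x, y, w, b, eps_step=0.05, max_steps=20):
    score = lambda v: sum(wi * vi for wi, vi in zip(w, v)) + b
    sign = lambda t: (t > 0) - (t < 0)
    direction = -1.0 if y == 1 else 1.0  # push the score toward the boundary
    x_adv = list(x)
    for _ in range(max_steps):
        # For a linear model the gradient of the score w.r.t. x is just w,
        # so the steepest L-inf step is eps_step * sign(w) toward the boundary.
        candidate = [xi + eps_step * direction * sign(wi)
                     for xi, wi in zip(x_adv, w)]
        if (1 if score(candidate) > 0 else 0) != y:
            # Label flipped: stop expanding and report the margin reached.
            return candidate, max(abs(c - o) for c, o in zip(candidate, x))
        x_adv = candidate
    return x_adv, max(abs(c - o) for c, o in zip(x_adv, x))

# Example: x = (0.5, 0.5), true label 1, model w = (1, 1), b = 0.
x_adv, margin = find_margin_sample([0.5, 0.5], 1, [1.0, 1.0], 0.0, eps_step=0.25)
```

A full adversarial training loop would generate such samples per batch and train the model on them; per the abstract, AME additionally makes the step sizes self-adaptive and gradient-aware, which this toy sketch does not attempt.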
LEADER 01000caa a22002652 4500
001 NLM360871003
003 DE-627
005 20240929232150.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7 |a 10.1016/j.neucom.2023.126524  |2 doi 
028 5 2 |a pubmed24n1552.xml 
035 |a (DE-627)NLM360871003 
035 |a (NLM)37587916 
035 |a (PII)126524 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Ma, Linhai  |e verfasserin  |4 aut 
245 1 0 |a Improving Adversarial Robustness of Deep Neural Networks via Adaptive Margin Evolution 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 29.09.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Adversarial training is the most popular and general strategy to improve Deep Neural Network (DNN) robustness against adversarial noises. Many adversarial training methods have been proposed in the past few years. However, most of these methods are highly sensitive to hyperparameters, especially the training noise upper bound. Tuning these hyperparameters is expensive and difficult for people outside the adversarial robustness research domain, which prevents adversarial training techniques from being used in many application fields. In this study, we propose a new adversarial training method, named Adaptive Margin Evolution (AME). Besides being hyperparameter-free for the user, our AME method places adversarial training samples at optimal locations in the input space by gradually expanding the exploration range with self-adaptive, gradient-aware step sizes. We evaluate AME and seven other well-known adversarial training methods on three common benchmark datasets (CIFAR10, SVHN, and Tiny ImageNet) under the most challenging adversarial attack, AutoAttack. The results show that: (1) on all three datasets, AME has the best overall performance; (2) on the much more challenging Tiny ImageNet dataset, AME has the best performance at every noise level. Our work may pave the way for adopting adversarial training techniques in application domains where hyperparameter-free methods are preferred.
650 4 |a Journal Article 
650 4 |a Deep neural networks 
650 4 |a adversarial robustness 
650 4 |a adversarial training 
650 4 |a hyperparameter-free 
650 4 |a optimal adversarial training sample 
700 1 |a Liang, Liang  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t Neurocomputing  |d 1998  |g 551(2023) vom: 28. Sept.  |w (DE-627)NLM098202456  |x 0925-2312  |7 nnns 
773 1 8 |g volume:551  |g year:2023  |g day:28  |g month:09 
856 4 0 |u http://dx.doi.org/10.1016/j.neucom.2023.126524  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 551  |j 2023  |b 28  |c 09