Improving Adversarial Robustness of Deep Neural Networks via Adaptive Margin Evolution
Published in: Neurocomputing. - 1998. - 551(2023), 28 Sept.
Format: Online article
Language: English
Published: 2023
Parent work: Neurocomputing
Keywords: Journal Article; Deep neural networks; adversarial robustness; adversarial training; hyperparameter-free; optimal adversarial training sample
Abstract: Adversarial training is the most popular and general strategy for improving Deep Neural Network (DNN) robustness against adversarial noises. Many adversarial training methods have been proposed in the past few years. However, most of these methods are highly sensitive to hyperparameters, especially the training noise upper bound. Tuning these hyperparameters is expensive and difficult for people outside the adversarial robustness research domain, which prevents adversarial training techniques from being used in many application fields. In this study, we propose a new adversarial training method, named Adaptive Margin Evolution (AME). Besides being hyperparameter-free for the user, our AME method places adversarial training samples at optimal locations in the input space by gradually expanding the exploration range with self-adaptive and gradient-aware step sizes. We evaluate AME and seven other well-known adversarial training methods on three common benchmark datasets (CIFAR10, SVHN, and Tiny ImageNet) under the most challenging adversarial attack: AutoAttack. The results show that: (1) on all three datasets, AME has the best overall performance; (2) on the much more challenging Tiny ImageNet dataset, AME has the best performance at every noise level. Our work may pave the way for adopting adversarial training techniques in application domains where hyperparameter-free methods are preferred.
Description: Date Revised 29.09.2024; published: Print-Electronic; Citation Status: PubMed-not-MEDLINE
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2023.126524
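
Note: The abstract describes adversarial training in which a per-sample noise margin is gradually expanded with self-adaptive, gradient-aware steps. The sketch below only illustrates that general idea, assuming a PGD-style inner attack and a simple per-sample margin-growth rule; it is not the paper's AME algorithm, and all function names, step counts, and growth factors are hypothetical.

```python
# Hypothetical PyTorch sketch: adversarial training with per-sample adaptive
# noise margins. Illustrative only; not the AME method from the paper.
import torch
import torch.nn.functional as F


def craft_adv_examples(model, x, y, eps, step_frac=0.25, n_steps=10):
    """PGD-style attack within a per-sample L_inf budget `eps` (shape [B, 1, 1, 1])."""
    x_adv = x.clone().detach()
    step = step_frac * eps  # per-sample, budget-proportional step size
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                     # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # keep valid pixel range
    return x_adv.detach()


def train_step(model, optimizer, x, y, eps, eps_growth=1.05, eps_cap=8 / 255):
    """One training step: attack with the current margins, update the model,
    then expand the margin only for samples the model still classifies correctly."""
    x_adv = craft_adv_examples(model, x, y, eps)

    optimizer.zero_grad()
    logits = model(x_adv)
    loss = F.cross_entropy(logits, y)
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        still_correct = (logits.argmax(dim=1) == y).view(-1, 1, 1, 1)
        grown = (eps * eps_growth).clamp(max=eps_cap)
        eps = torch.where(still_correct, grown, eps)  # evolve margins per sample
    return loss.item(), eps
```

In an actual training loop, `eps` would be initialized to a small value for every training sample (e.g. 1/255) and carried across epochs via the dataset index; how AME actually places training samples and adapts its margins is specified in the full paper.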