Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach

Bibliographic Details
Published in: IEEE Transactions on Image Processing: a publication of the IEEE Signal Processing Society. - 1992. - 30 (2021), dated: 26, pages 8955-8967
First Author: Yu, Hang (author)
Other Authors: Liu, Aishan, Li, Gengchao, Yang, Jichen, Zhang, Chongzhi
Format: Online article
Language: English
Published: 2021
Access to the parent work: IEEE Transactions on Image Processing: a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Abstract: Adversarial images are inputs with imperceptible perturbations crafted to mislead deep neural networks (DNNs), and they have attracted great attention in recent years. Although several defense strategies achieve encouraging robustness against adversarial samples, most of them still fail to consider robustness to common corruptions (e.g., noise, blur, and weather/digital effects). To address this problem, we propose a simple yet effective method, named Progressive Diversified Augmentation (PDA), which improves the robustness of DNNs by progressively injecting diverse adversarial noises during training. In other words, DNNs trained with PDA achieve better general robustness against both adversarial attacks and common corruptions than those trained with other strategies. In addition, PDA requires less training time and maintains high standard accuracy on clean examples. Further, we theoretically prove that PDA can control the perturbation bound and guarantee better robustness. Extensive results on CIFAR-10, SVHN, ImageNet, CIFAR-10-C and ImageNet-C demonstrate that PDA comprehensively outperforms its counterparts in robustness against adversarial examples and common corruptions as well as on clean images. Further experiments on frequency-based perturbations and visualized gradients show that PDA achieves general robustness and is more aligned with the human visual system.
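The abstract only outlines PDA at a high level (progressively injecting diverse adversarial noises during training). The following is a minimal sketch of such a training loop, not the authors' implementation: the linear epsilon schedule, the three-way noise family (Gaussian, uniform, FGSM-style), and names such as make_perturbation and train_pda_like are assumptions made for illustration.

```python
# Illustrative sketch of progressive, diversified noise injection during training.
# Epsilon schedule, noise family, and all function names are assumptions,
# not taken from the paper.
import random
import torch
import torch.nn.functional as F

def make_perturbation(model, x, y, eps):
    """Draw one of several perturbation types under the current budget eps."""
    kind = random.choice(["gaussian", "uniform", "fgsm"])
    if kind == "gaussian":
        delta = eps * torch.randn_like(x)
    elif kind == "uniform":
        delta = torch.empty_like(x).uniform_(-eps, eps)
    else:  # single-step gradient-sign (FGSM-style) adversarial noise
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        delta = eps * grad.sign()
    return delta.clamp(-eps, eps)

def train_pda_like(model, loader, optimizer, epochs, eps_max=8 / 255):
    model.train()
    for epoch in range(epochs):
        # Progressive schedule: the perturbation budget grows as training proceeds.
        eps = eps_max * (epoch + 1) / epochs
        for x, y in loader:
            delta = make_perturbation(model, x, y, eps)
            x_aug = (x + delta).clamp(0.0, 1.0)  # keep augmented images in valid range
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x_aug), y)
            loss.backward()
            optimizer.step()
```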
Description: Date Completed 10.12.2021
Date Revised 14.12.2021
published: Print-Electronic
Citation Status MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2021.3121150