Toward Visual Distortion in Black-Box Attacks


Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 30 (2021), pp. 6156-6167
Main Author: Li, Nannan (author)
Other Authors: Chen, Zhenzhong
Format: Online article
Language: English
Published: 2021
Access to parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Abstract: Constructing adversarial examples in a black-box threat model degrades the original images by introducing visual distortion. In this paper, we propose a novel black-box attack that directly minimizes the induced distortion by learning the noise distribution of the adversarial example, assuming only loss-oracle access to the black-box network. To quantify visual distortion, the perceptual distance between the adversarial example and the original image is incorporated into our loss. We first approximate the gradient of the corresponding non-differentiable loss function by sampling noise from the learned noise distribution. The distribution is then updated with the estimated gradient to reduce visual distortion, and learning continues until an adversarial example is found. We validate the effectiveness of our attack on ImageNet. Our attack results in much lower distortion than state-of-the-art black-box attacks and achieves a 100% success rate on InceptionV3, ResNet50 and VGG16bn. Furthermore, we theoretically prove the convergence of our model. The code is publicly available at https://github.com/Alina-1997/visual-distortion-in-attack
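
The sampling-based optimization the abstract describes can be sketched compactly. The following is a minimal illustration, not the paper's implementation: it assumes an isotropic Gaussian N(mu, sigma^2 I) as the learned noise distribution, uses a plain L2 norm as a stand-in for the paper's perceptual distance, and estimates the gradient of the (non-differentiable) combined loss with the score-function (log-derivative) trick. All names (black_box_attack, loss_oracle, is_adversarial) and hyperparameter values are hypothetical, not taken from the released code.

    import numpy as np

    def black_box_attack(loss_oracle, is_adversarial, x,
                         lam=0.1, sigma=0.05, lr=0.01,
                         n_samples=20, max_iters=1000):
        # Illustrative sketch only: the Gaussian parameterization and the
        # L2 distortion penalty are assumptions, not the paper's formulation.
        # mu parameterizes the learned noise distribution N(mu, sigma^2 I).
        mu = np.zeros_like(x)
        for _ in range(max_iters):
            grad = np.zeros_like(mu)
            for _ in range(n_samples):
                eps = np.random.randn(*x.shape)   # standard normal draw
                noise = mu + sigma * eps          # sample from the distribution
                adv = np.clip(x + noise, 0.0, 1.0)
                # Combined objective: black-box attack loss (oracle value only,
                # never its gradient) plus a weighted distortion penalty.
                total = loss_oracle(adv) + lam * np.linalg.norm(noise)
                # Score-function estimator: for a Gaussian,
                # grad_mu log p(noise) = eps / sigma.
                grad += total * eps / sigma
            grad /= n_samples
            mu -= lr * grad                       # update the distribution mean
            adv = np.clip(x + mu, 0.0, 1.0)
            if is_adversarial(adv):               # stop once the attack succeeds
                return adv
        return None

Because only loss values enter the estimator, the target network is treated strictly as a black box, matching the loss-oracle assumption stated in the abstract; lowering lam trades more queries for less visual distortion in this sketch.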
Description: Date Revised 12.07.2021
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2021.3092822