Universal Adversarial Patch Attack for Automatic Checkout Using Perceptual and Attentional Bias
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society, vol. 31 (2022), pp. 598-611
Format: Online article
Language: English
Published: 2022
Access to the parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Abstract: Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs). Recently, adversarial patches, with noise confined to a small, localized region, have emerged because they are easy to deploy in real-world scenarios. However, existing strategies fail to generate adversarial patches with strong generalization ability because they ignore the inherent biases of models. In other words, the resulting patches are input-specific and fail to attack images from all classes or across different models, especially unseen classes and black-box models. To address this problem, this paper proposes a bias-based framework that generates universal adversarial patches with strong generalization ability by exploiting perceptual bias and attentional bias to improve attacking ability. Regarding perceptual bias, since DNNs are strongly biased towards textures, we exploit hard examples, which convey strong model uncertainty, and extract a textural patch prior from them using style similarities. This patch prior lies closer to decision boundaries and thus promotes attacks across classes. As for attentional bias, motivated by the fact that different models share similar attention patterns for the same image, we exploit this bias by confusing these model-shared attention patterns, so the generated adversarial patches transfer better across models. Taking Automatic Check-out (ACO) as the typical scenario, extensive experiments covering white-box and black-box settings are conducted in both the digital world (RPC, the largest ACO-related dataset) and the physical world (Taobao and JD, the world's largest online shopping platforms). Experimental results demonstrate that the proposed framework outperforms state-of-the-art adversarial patch attack methods.
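To make the two bias terms in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: it pairs a Gram-matrix style-similarity term (a standard way to measure texture/style agreement) with an attention-entropy term as one plausible way to "confuse" shared attention patterns. All function names, the entropy-based confusion loss, the 0.1 weight, and the dummy tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Gram matrix of a (B, C, H, W) feature map, the standard texture/style statistic."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_prior_loss(patch_feat, prior_feat):
    # Pull the patch's texture statistics toward a textural prior extracted
    # from hard examples, following the style-similarity idea in the abstract.
    return F.mse_loss(gram_matrix(patch_feat), gram_matrix(prior_feat))

def attention_confusion_loss(attn_map, eps=1e-8):
    # One plausible reading of "confusing model-shared attention patterns":
    # push the spatial attention map toward uniform by maximizing its entropy
    # (returned as negative entropy, so that minimizing it raises entropy).
    p = attn_map.flatten(1)
    p = p / (p.sum(dim=1, keepdim=True) + eps)
    entropy = -(p * (p + eps).log()).sum(dim=1)
    return -entropy.mean()

# Dummy-tensor usage: in practice feat_* would come from a CNN backbone on the
# patched image, and attn from an attention-extraction method such as Grad-CAM.
feat_patch = torch.rand(1, 64, 16, 16)
feat_prior = torch.rand(1, 64, 16, 16)
attn = torch.rand(1, 14, 14)
total = style_prior_loss(feat_patch, feat_prior) + 0.1 * attention_confusion_loss(attn)
print(total.item())
```

In a full attack loop, these terms would be weighted alongside the usual misclassification objective and backpropagated to the patch pixels; the relative weights here are placeholders, since the abstract does not specify them.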
Description: Date Completed: 24.12.2021; Date Revised: 24.12.2021; Published: Print-Electronic; Citation Status: MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2021.3127849