Universal Adversarial Patch Attack for Automatic Checkout Using Perceptual and Attentional Bias

Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs). Recently, the adversarial patch, with noise confined to a small and localized region, has emerged because it is easy to realize in real-world scenarios. However, existing strategies fail to generate adversarial patches with strong generalization ability because they ignore the inherent biases of models. In other words, the adversarial patches are always input-specific and fail to attack images from all classes or different models, especially unseen classes and black-box models. To address this problem, this paper proposes a bias-based framework that generates universal adversarial patches with strong generalization ability by exploiting the perceptual bias and the attentional bias of models. Regarding the perceptual bias, since DNNs are strongly biased towards textures, we exploit hard examples, which convey strong model uncertainties, and extract a textural patch prior from them using style similarities. The patch prior is closer to decision boundaries and promotes attacks across classes. As for the attentional bias, motivated by the fact that different models share similar attention patterns towards the same image, we exploit this bias by confusing the model-shared attention patterns, so the generated adversarial patches obtain stronger transferability among different models. Taking Automatic Check-out (ACO) as the typical scenario, extensive experiments covering white-box and black-box settings are conducted in both the digital world (RPC, the largest ACO-related dataset) and the physical world (Taobao and JD, the world's largest online shopping platforms). Experimental results demonstrate that our proposed framework outperforms state-of-the-art adversarial patch attack methods.
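
The abstract sketches two technical ingredients: a textural patch prior obtained through style similarities (the perceptual bias) and a term that disturbs the attention patterns shared across models (the attentional bias). Below is a minimal, hypothetical PyTorch sketch of how such a bias-driven patch optimization could be set up; the victim model, layer choice, patch size and placement, loss weights, and helper names (gram_matrix, feature_and_attention) are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch (assumed names and hyperparameters), not the paper's code:
    # optimize a universal patch so that (a) its features match a textural prior via
    # Gram-matrix style similarity and (b) the model's spatial attention on patched
    # images is suppressed, as a crude stand-in for confusing shared attention.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = models.resnet50(weights=None).eval().to(device)  # stand-in victim model

    def gram_matrix(feat):
        # Channel-correlation (style) representation of a feature map.
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)
        return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

    def feature_and_attention(x):
        # Intermediate features plus a simple attention proxy (channel-averaged activation).
        feat = model.layer3(model.layer2(model.layer1(
            model.maxpool(model.relu(model.bn1(model.conv1(x)))))))
        return feat, feat.abs().mean(dim=1, keepdim=True)

    patch = torch.rand(1, 3, 64, 64, device=device, requires_grad=True)
    prior = torch.rand(1, 3, 64, 64, device=device)      # textural patch prior (assumed given)
    images = torch.rand(8, 3, 224, 224, device=device)   # placeholder training batch
    optimizer = torch.optim.Adam([patch], lr=0.01)

    prior_feat, _ = feature_and_attention(F.interpolate(prior, size=224))
    prior_gram = gram_matrix(prior_feat)

    for _ in range(10):
        pasted = images.clone()
        pasted[:, :, 80:144, 80:144] = patch.clamp(0, 1)  # paste patch at a fixed location
        adv_feat, adv_attn = feature_and_attention(pasted)
        style_loss = F.mse_loss(gram_matrix(adv_feat),
                                prior_gram.expand(adv_feat.size(0), -1, -1))
        attention_loss = adv_attn.mean()                  # suppress shared attention responses
        loss = style_loss + 0.1 * attention_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()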

Bibliographic details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 31 (2022), 01, pages 598-611
First author: Wang, Jiakai (author)
Other authors: Liu, Aishan; Bai, Xiao; Liu, Xianglong
Format: Online article
Language: English
Published: 2022
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM333886984
003 DE-627
005 20231225222552.0
007 cr uuu---uuuuu
008 231225s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2021.3127849  |2 doi 
028 5 2 |a pubmed24n1112.xml 
035 |a (DE-627)NLM333886984 
035 |a (NLM)34851825 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Wang, Jiakai  |e verfasserin  |4 aut 
245 1 0 |a Universal Adversarial Patch Attack for Automatic Checkout Using Perceptual and Attentional Bias 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 24.12.2021 
500 |a Date Revised 24.12.2021 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs). Recently, the adversarial patch, with noise confined to a small and localized region, has emerged because it is easy to realize in real-world scenarios. However, existing strategies fail to generate adversarial patches with strong generalization ability because they ignore the inherent biases of models. In other words, the adversarial patches are always input-specific and fail to attack images from all classes or different models, especially unseen classes and black-box models. To address this problem, this paper proposes a bias-based framework that generates universal adversarial patches with strong generalization ability by exploiting the perceptual bias and the attentional bias of models. Regarding the perceptual bias, since DNNs are strongly biased towards textures, we exploit hard examples, which convey strong model uncertainties, and extract a textural patch prior from them using style similarities. The patch prior is closer to decision boundaries and promotes attacks across classes. As for the attentional bias, motivated by the fact that different models share similar attention patterns towards the same image, we exploit this bias by confusing the model-shared attention patterns, so the generated adversarial patches obtain stronger transferability among different models. Taking Automatic Check-out (ACO) as the typical scenario, extensive experiments covering white-box and black-box settings are conducted in both the digital world (RPC, the largest ACO-related dataset) and the physical world (Taobao and JD, the world's largest online shopping platforms). Experimental results demonstrate that our proposed framework outperforms state-of-the-art adversarial patch attack methods.
650 4 |a Journal Article 
700 1 |a Liu, Aishan  |e verfasserin  |4 aut 
700 1 |a Bai, Xiao  |e verfasserin  |4 aut 
700 1 |a Liu, Xianglong  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 31(2022) vom: 01., Seite 598-611  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:31  |g year:2022  |g day:01  |g pages:598-611 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2021.3127849  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 31  |j 2022  |b 01  |h 598-611