Toward Understanding and Boosting Adversarial Transferability From a Distribution Perspective


Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 31(2022), from: 04, pages 6487-6501
Main Author: Zhu, Yao (Author)
Other Authors: Chen, Yuefeng, Li, Xiaodan, Chen, Kejiang, He, Yuan, Tian, Xiang, Zheng, Bolun, Chen, Yaowu, Huang, Qingming
Format: Online article
Language: English
Published: 2022
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM347384994
003 DE-627
005 20231226033707.0
007 cr uuu---uuuuu
008 231226s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2022.3211736  |2 doi 
028 5 2 |a pubmed24n1157.xml 
035 |a (DE-627)NLM347384994 
035 |a (NLM)36223353 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Zhu, Yao  |e verfasserin  |4 aut 
245 1 0 |a Toward Understanding and Boosting Adversarial Transferability From a Distribution Perspective 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 25.10.2022 
500 |a Date Revised 25.10.2022 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a Transferable adversarial attacks against deep neural networks (DNNs) have received broad attention in recent years. An adversarial example crafted on a surrogate model can then successfully attack an unknown target model, which poses a severe threat to DNNs. The exact underlying reasons for this transferability are still not completely understood. Previous work mostly explores the causes from the model perspective, e.g., decision boundary, model architecture, and model capacity. Here, we investigate the transferability from the data distribution perspective and hypothesize that pushing the image away from its original distribution can enhance adversarial transferability. Specifically, moving the image out of its original distribution makes it hard for different models to classify the image correctly, which benefits the untargeted attack, while dragging the image into the target distribution misleads the models into classifying the image as the target class, which benefits the targeted attack. To this end, we propose a novel method that crafts adversarial examples by manipulating the distribution of the image. We conduct comprehensive transferable attacks against multiple DNNs to demonstrate the effectiveness of the proposed method. Our method significantly improves the transferability of the crafted attacks and achieves state-of-the-art performance in both untargeted and targeted scenarios, surpassing the previous best method by up to 40% in some cases. In summary, our work provides new insight into studying adversarial transferability and offers a strong counterpart for future research on adversarial defense. 
650 4 |a Journal Article 
700 1 |a Chen, Yuefeng  |e verfasserin  |4 aut 
700 1 |a Li, Xiaodan  |e verfasserin  |4 aut 
700 1 |a Chen, Kejiang  |e verfasserin  |4 aut 
700 1 |a He, Yuan  |e verfasserin  |4 aut 
700 1 |a Tian, Xiang  |e verfasserin  |4 aut 
700 1 |a Zheng, Bolun  |e verfasserin  |4 aut 
700 1 |a Chen, Yaowu  |e verfasserin  |4 aut 
700 1 |a Huang, Qingming  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 31(2022) vom: 04., Seite 6487-6501  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:31  |g year:2022  |g day:04  |g pages:6487-6501 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2022.3211736  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 31  |j 2022  |b 04  |h 6487-6501
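Note: the abstract in field 520 describes the distribution-manipulation idea only at a high level; the concrete algorithm is in the full text behind the DOI above. As a rough, non-authoritative sketch of the general idea for the untargeted case (pushing an image away from the output distribution a surrogate model assigns to it), a PGD-style loop that ascends the cross-entropy loss looks roughly as follows. The names surrogate, x, and labels are assumed placeholders, and this is an illustrative baseline, not the authors' method.

import torch
import torch.nn.functional as F

def untargeted_transfer_attack(surrogate, x, labels, eps=8/255, alpha=2/255, steps=10):
    """Sketch: push x away from the surrogate's original class distribution
    by ascending cross-entropy inside an L-infinity ball of radius eps.
    (Illustrative PGD loop, not the paper's algorithm.)"""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Cross-entropy w.r.t. the original labels measures how close the
        # surrogate's output distribution still is to the original class;
        # maximizing it pushes the image away from that distribution.
        loss = F.cross_entropy(surrogate(x_adv), labels)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()   # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)               # keep a valid image
    return x_adv.detach()

The targeted case described in the abstract would instead descend a loss toward the target class; how the paper actually models and shifts the image distribution should be taken from the article itself.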