Adaptive Perturbation for Adversarial Attack

Detailed Description

In recent years, the security of deep learning models has attracted more and more attention with the rapid development of neural networks, which are vulnerable to adversarial examples. Almost all existing gradient-based attack methods use the sign function when generating perturbations in order to meet the perturbation budget under the L∞ norm. However, we find that the sign function may be improper for generating adversarial examples, since it modifies the exact gradient direction. Instead of using the sign function, we propose to directly use the exact gradient direction with a scaling factor to generate adversarial perturbations, which improves the attack success rates of adversarial examples even with smaller perturbations. At the same time, we theoretically prove that this method achieves better black-box transferability. Moreover, considering that the best scaling factor varies across images, we propose an adaptive scaling factor generator that finds an appropriate scaling factor for each image, avoiding the computational cost of searching for the scaling factor manually. Our method can be integrated with almost all existing gradient-based attack methods to further improve their attack success rates. Extensive experiments on the CIFAR10 and ImageNet datasets show that our method exhibits higher transferability and outperforms the state-of-the-art methods.
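
To make the idea in the abstract concrete, the following is a minimal PyTorch-style sketch (not the authors' code) contrasting the usual sign-based step with a step along the exact gradient direction multiplied by a scaling factor. The model, loss, fixed scale value, per-sample normalization, and the simple L∞ clipping are illustrative assumptions; the paper's adaptive scaling factor generator is stood in for here by the constant scale argument.

import torch
import torch.nn.functional as F

def sign_step(model, x, y, eps):
    # Standard FGSM-style update: move along the sign of the gradient,
    # which automatically satisfies the L-infinity budget eps.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def scaled_gradient_step(model, x, y, eps, scale):
    # Update along the exact (per-sample normalized) gradient direction,
    # multiplied by a scaling factor, then clipped back into the
    # L-infinity ball of radius eps around the input x.
    # Assumes 4-D image batches with pixel values in [0, 1].
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    flat_norm = grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12
    x_adv = x + scale * grad / flat_norm                    # keep the true direction
    x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # enforce L-infinity budget
    return x_adv.clamp(0, 1).detach()

In an iterative attack, a step like scaled_gradient_step would replace the sign-based update inside the loop; the adaptive generator described in the abstract would predict scale per image rather than fixing it.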

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46(2024), 8, 19 July, pages 5663-5676
Main Author: Yuan, Zheng (Author)
Other Authors: Zhang, Jie, Jiang, Zhaoyan, Li, Liangliang, Shan, Shiguang
Format: Online Article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM368670252
003 DE-627
005 20240703234502.0
007 cr uuu---uuuuu
008 240222s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3367773  |2 doi 
028 5 2 |a pubmed24n1459.xml 
035 |a (DE-627)NLM368670252 
035 |a (NLM)38376968 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Yuan, Zheng  |e verfasserin  |4 aut 
245 1 0 |a Adaptive Perturbation for Adversarial Attack 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 03.07.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a In recent years, the security of deep learning models has attracted more and more attention with the rapid development of neural networks, which are vulnerable to adversarial examples. Almost all existing gradient-based attack methods use the sign function when generating perturbations in order to meet the perturbation budget under the L∞ norm. However, we find that the sign function may be improper for generating adversarial examples, since it modifies the exact gradient direction. Instead of using the sign function, we propose to directly use the exact gradient direction with a scaling factor to generate adversarial perturbations, which improves the attack success rates of adversarial examples even with smaller perturbations. At the same time, we theoretically prove that this method achieves better black-box transferability. Moreover, considering that the best scaling factor varies across images, we propose an adaptive scaling factor generator that finds an appropriate scaling factor for each image, avoiding the computational cost of searching for the scaling factor manually. Our method can be integrated with almost all existing gradient-based attack methods to further improve their attack success rates. Extensive experiments on the CIFAR10 and ImageNet datasets show that our method exhibits higher transferability and outperforms the state-of-the-art methods.
650 4 |a Journal Article 
700 1 |a Zhang, Jie  |e verfasserin  |4 aut 
700 1 |a Jiang, Zhaoyan  |e verfasserin  |4 aut 
700 1 |a Li, Liangliang  |e verfasserin  |4 aut 
700 1 |a Shan, Shiguang  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 8 vom: 19. Juli, Seite 5663-5676  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:46  |g year:2024  |g number:8  |g day:19  |g month:07  |g pages:5663-5676 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3367773  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 8  |b 19  |c 07  |h 5663-5676