Image Super-Resolution as a Defense Against Adversarial Attacks

Convolutional Neural Networks have achieved significant success across multiple computer vision tasks. However, they are vulnerable to carefully crafted, human-imperceptible adversarial noise patterns, which constrains their deployment in critical, security-sensitive systems. This paper proposes a computationally efficient image enhancement approach that provides a strong defense mechanism to effectively mitigate the effect of such adversarial perturbations. We show that deep image restoration networks learn mapping functions that can bring off-the-manifold adversarial samples back onto the natural image manifold, thus restoring classification towards the correct classes. A distinguishing feature of our approach is that, in addition to providing robustness against attacks, it simultaneously enhances image quality and retains model performance on clean images. Furthermore, the proposed method does not modify the classifier or require a separate mechanism to detect adversarial images. The effectiveness of the scheme has been demonstrated through extensive experiments, where it has proven to be a strong defense in gray-box settings. The proposed scheme is simple and has the following advantages: (1) it does not require any model training or parameter optimization, (2) it complements other existing defense mechanisms, (3) it is agnostic to the attacked model and attack type, and (4) it provides superior performance across all popular attack algorithms. Our code is publicly available at https://github.com/aamir-mustafa/super-resolution-adversarial-defense
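
The following is a minimal, hedged sketch of the defense pipeline described in the abstract: a (possibly adversarial) input image is passed through an image restoration / super-resolution step before being handed to an unmodified classifier. It is not the authors' implementation (that is in the linked repository). The super_resolve function below substitutes plain bicubic upscaling in PyTorch as a stand-in for a trained super-resolution network, and the ResNet-50 classifier, the 224x224 input size, and all function names are illustrative assumptions (torchvision >= 0.13 assumed for the weights enum).

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Unmodified, off-the-shelf classifier; the defense never touches its weights.
classifier = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval().to(device)

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def super_resolve(x, scale=2):
    # Stand-in for a trained super-resolution network (assumption): upscale the
    # image, then resize back to the classifier's input size. The intuition from
    # the abstract is that the restoration step pulls an off-the-manifold
    # adversarial sample back toward the natural image manifold.
    up = F.interpolate(x, scale_factor=scale, mode="bicubic", align_corners=False)
    return F.interpolate(up, size=(224, 224), mode="bicubic", align_corners=False).clamp(0, 1)

@torch.no_grad()
def defended_predict(img_path):
    img = Image.open(img_path).convert("RGB").resize((224, 224))
    x = transforms.functional.to_tensor(img).unsqueeze(0).to(device)  # (1, 3, 224, 224), values in [0, 1]
    x = super_resolve(x)                                              # defense applied purely as preprocessing
    logits = classifier(normalize(x.squeeze(0)).unsqueeze(0))
    return int(logits.argmax(dim=1).item())

Because only the preprocessing step changes and the classifier is never retrained or otherwise modified, this style of defense is agnostic to the attacked model and attack type, as the abstract notes.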

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - (2019), 19 September
Main Author: Mustafa, Aamir (Author)
Other Authors: Khan, Salman H; Hayat, Munawar; Shen, Jianbing; Shao, Ling
Format: Online Article
Language: English
Published: 2019
Parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM301536635
003 DE-627
005 20240229162333.0
007 cr uuu---uuuuu
008 231225s2019 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2019.2940533  |2 doi 
028 5 2 |a pubmed24n1308.xml 
035 |a (DE-627)NLM301536635 
035 |a (NLM)31545722 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Mustafa, Aamir  |e verfasserin  |4 aut 
245 1 0 |a Image Super-Resolution as a Defense Against Adversarial Attacks 
264 1 |c 2019 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 27.02.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a Convolutional Neural Networks have achieved significant success across multiple computer vision tasks. However, they are vulnerable to carefully crafted, human-imperceptible adversarial noise patterns, which constrains their deployment in critical, security-sensitive systems. This paper proposes a computationally efficient image enhancement approach that provides a strong defense mechanism to effectively mitigate the effect of such adversarial perturbations. We show that deep image restoration networks learn mapping functions that can bring off-the-manifold adversarial samples back onto the natural image manifold, thus restoring classification towards the correct classes. A distinguishing feature of our approach is that, in addition to providing robustness against attacks, it simultaneously enhances image quality and retains model performance on clean images. Furthermore, the proposed method does not modify the classifier or require a separate mechanism to detect adversarial images. The effectiveness of the scheme has been demonstrated through extensive experiments, where it has proven to be a strong defense in gray-box settings. The proposed scheme is simple and has the following advantages: (1) it does not require any model training or parameter optimization, (2) it complements other existing defense mechanisms, (3) it is agnostic to the attacked model and attack type, and (4) it provides superior performance across all popular attack algorithms. Our code is publicly available at https://github.com/aamir-mustafa/super-resolution-adversarial-defense 
650 4 |a Journal Article 
700 1 |a Khan, Salman H  |e verfasserin  |4 aut 
700 1 |a Hayat, Munawar  |e verfasserin  |4 aut 
700 1 |a Shen, Jianbing  |e verfasserin  |4 aut 
700 1 |a Shao, Ling  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g (2019) vom: 19. Sept.  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g year:2019  |g day:19  |g month:09 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2019.2940533  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |j 2019  |b 19  |c 09