Deeply Supervised Discriminative Learning for Adversarial Defense

Deep neural networks can easily be fooled by an adversary with minuscule perturbations added to an input image. The existing defense techniques suffer greatly under white-box attack settings, where an adversary has full knowledge of the network and can iterate several times to find strong perturbations. We observe that the main reason for the existence of such vulnerabilities is the close proximity of different class samples in the learned feature space of deep models. This allows the model decisions to be completely changed by adding an imperceptible perturbation to the inputs. To counter this, we propose to class-wise disentangle the intermediate feature representations of deep networks, specifically forcing the features for each class to lie inside a convex polytope that is maximally separated from the polytopes of other classes. In this manner, the network is forced to learn distinct and distant decision regions for each class. We observe that this simple constraint on the features greatly enhances the robustness of learned models, even against the strongest white-box attacks, without degrading the classification performance on clean images. We report extensive evaluations in both black-box and white-box attack scenarios and show significant gains in comparison to state-of-the-art defenses.
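The abstract describes the defense only at a high level. As a rough illustration (not the paper's actual formulation), the sketch below shows one way a class-wise feature-separation constraint could be implemented in PyTorch: each class gets a learnable center in feature space, samples are pulled toward their own class center, and distinct centers are pushed at least a margin apart. The class name ClassSeparationLoss, the margin value, and the loss weighting are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassSeparationLoss(nn.Module):
    # Hypothetical auxiliary loss: pulls intermediate features toward their own
    # class center and pushes different class centers at least `margin` apart.
    def __init__(self, num_classes: int, feat_dim: int, margin: float = 1.0):
        super().__init__()
        # One learnable center (prototype) per class in feature space.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Attraction: squared distance of each sample to the center of its class.
        attract = (features - self.centers[labels]).pow(2).sum(dim=1).mean()
        # Repulsion: hinge penalty on pairwise distances between distinct centers,
        # encouraging the class regions to stay well separated.
        dists = torch.cdist(self.centers, self.centers)
        mask = ~torch.eye(self.centers.size(0), dtype=torch.bool, device=dists.device)
        repel = F.relu(self.margin - dists[mask]).mean()
        return attract + repel

# Typical usage: add the auxiliary term to the usual cross-entropy objective,
# applied to features from one or more intermediate layers.
# sep_loss = ClassSeparationLoss(num_classes=10, feat_dim=512)
# loss = F.cross_entropy(logits, targets) + 0.01 * sep_loss(feats, targets)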

Detailed description

Bibliographic details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 43(2021), 9, 09 Sept., pages 3154-3166
First author: Mustafa, Aamir (Author)
Other authors: Khan, Salman H, Hayat, Munawar, Goecke, Roland, Shen, Jianbing, Shao, Ling
Format: Online article
Language: English
Published: 2021
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Keywords: Journal Article
LEADER 01000naa a22002652 4500
001 NLM307379906
003 DE-627
005 20231225125432.0
007 cr uuu---uuuuu
008 231225s2021 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2020.2978474  |2 doi 
028 5 2 |a pubmed24n1024.xml 
035 |a (DE-627)NLM307379906 
035 |a (NLM)32149623 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Mustafa, Aamir  |e verfasserin  |4 aut 
245 1 0 |a Deeply Supervised Discriminative Learning for Adversarial Defense 
264 1 |c 2021 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 05.08.2021 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Deep neural networks can easily be fooled by an adversary with minuscule perturbations added to an input image. The existing defense techniques suffer greatly under white-box attack settings, where an adversary has full knowledge of the network and can iterate several times to find strong perturbations. We observe that the main reason for the existence of such vulnerabilities is the close proximity of different class samples in the learned feature space of deep models. This allows the model decisions to be completely changed by adding an imperceptible perturbation to the inputs. To counter this, we propose to class-wise disentangle the intermediate feature representations of deep networks, specifically forcing the features for each class to lie inside a convex polytope that is maximally separated from the polytopes of other classes. In this manner, the network is forced to learn distinct and distant decision regions for each class. We observe that this simple constraint on the features greatly enhances the robustness of learned models, even against the strongest white-box attacks, without degrading the classification performance on clean images. We report extensive evaluations in both black-box and white-box attack scenarios and show significant gains in comparison to state-of-the-art defenses.
650 4 |a Journal Article 
700 1 |a Khan, Salman H  |e verfasserin  |4 aut 
700 1 |a Hayat, Munawar  |e verfasserin  |4 aut 
700 1 |a Goecke, Roland  |e verfasserin  |4 aut 
700 1 |a Shen, Jianbing  |e verfasserin  |4 aut 
700 1 |a Shao, Ling  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 43(2021), 9 vom: 09. Sept., Seite 3154-3166  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:43  |g year:2021  |g number:9  |g day:09  |g month:09  |g pages:3154-3166 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2020.2978474  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 43  |j 2021  |e 9  |b 09  |c 09  |h 3154-3166