Analyzing the Noise Robustness of Deep Neural Networks

Adversarial examples, generated by adding small but intentionally imperceptible perturbations to normal examples, can mislead deep neural networks (DNNs) to make incorrect predictions. Although much work has been done on both adversarial attack and defense, a fine-grained understanding of adversarial examples is still lacking. To address this issue, we present a visual analysis method to explain why adversarial examples are misclassified. The key is to compare and analyze the datapaths of both the adversarial and normal examples. A datapath is a group of critical neurons along with their connections. We formulate the datapath extraction as a subset selection problem and solve it by constructing and training a neural network. A multi-level visualization consisting of a network-level visualization of data flows, a layer-level visualization of feature maps, and a neuron-level visualization of learned features has been designed to help investigate how datapaths of adversarial and normal examples diverge and merge in the prediction process. A quantitative evaluation and a case study were conducted to demonstrate the promise of our method to explain the misclassification of adversarial examples.
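The abstract above describes extracting "datapaths" of critical neurons and comparing them between normal and adversarial examples. As a rough, hypothetical illustration only (not the authors' subset-selection network), the sketch below ranks convolutional channels by how much their activations diverge between a normal input and its adversarial counterpart; the model, layer choice, and stand-in perturbation are assumptions.

```python
# Hypothetical sketch: rank channels whose activations diverge most between a
# normal example and its adversarial counterpart. This is NOT the paper's
# subset-selection formulation; it only illustrates the "compare datapaths" idea.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()  # assumed model, untrained here

activations = {}

def hook(name):
    def _hook(module, inputs, output):
        activations[name] = output.detach()
    return _hook

# Register hooks on all convolutional layers (the layer choice is an assumption).
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(hook(name))

def channel_divergence(normal, adversarial, top_k=5):
    """Per hooked layer, return the channels with the largest mean absolute
    activation difference between the two inputs."""
    activations.clear()
    model(normal)
    normal_acts = dict(activations)
    activations.clear()
    model(adversarial)
    critical = {}
    for name, adv_act in activations.items():
        diff = (adv_act - normal_acts[name]).abs().mean(dim=(0, 2, 3))
        critical[name] = torch.topk(diff, k=min(top_k, diff.numel())).indices.tolist()
    return critical

# Example with random tensors standing in for a normal/adversarial image pair.
x = torch.randn(1, 3, 224, 224)
x_adv = x + 0.01 * torch.randn_like(x)  # stand-in perturbation, not a real attack
print(channel_divergence(x, x_adv))
```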

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 27(2021), 7, 22 July, pages 3289-3304
Main Author: Cao, Kelei (Author)
Other Authors: Liu, Mengchen, Su, Hang, Wu, Jing, Zhu, Jun, Liu, Shixia
Format: Online Article
Language: English
Published: 2021
Access to the parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article, Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM30581950X
003 DE-627
005 20231225122014.0
007 cr uuu---uuuuu
008 231225s2021 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2020.2969185  |2 doi 
028 5 2 |a pubmed24n1019.xml 
035 |a (DE-627)NLM30581950X 
035 |a (NLM)31985427 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Cao, Kelei  |e verfasserin  |4 aut 
245 1 0 |a Analyzing the Noise Robustness of Deep Neural Networks 
264 1 |c 2021 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 29.09.2021 
500 |a Date Revised 29.09.2021 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Adversarial examples, generated by adding small but intentionally imperceptible perturbations to normal examples, can mislead deep neural networks (DNNs) to make incorrect predictions. Although much work has been done on both adversarial attack and defense, a fine-grained understanding of adversarial examples is still lacking. To address this issue, we present a visual analysis method to explain why adversarial examples are misclassified. The key is to compare and analyze the datapaths of both the adversarial and normal examples. A datapath is a group of critical neurons along with their connections. We formulate the datapath extraction as a subset selection problem and solve it by constructing and training a neural network. A multi-level visualization consisting of a network-level visualization of data flows, a layer-level visualization of feature maps, and a neuron-level visualization of learned features has been designed to help investigate how datapaths of adversarial and normal examples diverge and merge in the prediction process. A quantitative evaluation and a case study were conducted to demonstrate the promise of our method to explain the misclassification of adversarial examples.
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Liu, Mengchen  |e verfasserin  |4 aut 
700 1 |a Su, Hang  |e verfasserin  |4 aut 
700 1 |a Wu, Jing  |e verfasserin  |4 aut 
700 1 |a Zhu, Jun  |e verfasserin  |4 aut 
700 1 |a Liu, Shixia  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 27(2021), 7 vom: 22. Juli, Seite 3289-3304  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:27  |g year:2021  |g number:7  |g day:22  |g month:07  |g pages:3289-3304 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2020.2969185  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 27  |j 2021  |e 7  |b 22  |c 07  |h 3289-3304