Summit : Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations

Deep learning is increasingly used in decision-making tasks. However, understanding how neural networks produce final predictions remains a fundamental challenge. Existing work on interpreting neural network predictions for images often focuses on explaining predictions for single images or neurons. As predictions are often computed from millions of weights that are optimized over millions of images, such explanations can easily miss a bigger picture. We present Summit, an interactive system that scalably and systematically summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions. Summit introduces two new scalable summarization techniques: (1) activation aggregation discovers important neurons, and (2) neuron-influence aggregation identifies relationships among such neurons. Summit combines these techniques to create the novel attribution graph that reveals and summarizes crucial neuron associations and substructures that contribute to a model's outcomes. Summit scales to large data, such as the ImageNet dataset with 1.2M images, and leverages neural network feature visualization and dataset examples to help users distill large, complex neural network models into compact, interactive visualizations. We present neural network exploration scenarios where Summit helps us discover multiple surprising insights into a prevalent, large-scale image classifier's learned representations and informs future neural network architecture design. The Summit visualization runs in modern web browsers and is open-sourced.

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 26(2020), 1, 22 Jan., pages 1096-1106
First author: Hohman, Fred (author)
Other authors: Park, Haekyu; Robinson, Caleb; Polo Chau, Duen Horng
Format: Online article
Language: English
Published: 2020
Access to the parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article; Research Support, U.S. Gov't, Non-P.H.S.
LEADER 01000naa a22002652 4500
001 NLM300527233
003 DE-627
005 20231225102722.0
007 cr uuu---uuuuu
008 231225s2020 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2019.2934659  |2 doi 
028 5 2 |a pubmed24n1001.xml 
035 |a (DE-627)NLM300527233 
035 |a (NLM)31443005 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Hohman, Fred  |e verfasserin  |4 aut 
245 1 0 |a Summit  |b Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations 
264 1 |c 2020 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 12.03.2020 
500 |a Date Revised 12.03.2020 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Deep learning is increasingly used in decision-making tasks. However, understanding how neural networks produce final predictions remains a fundamental challenge. Existing work on interpreting neural network predictions for images often focuses on explaining predictions for single images or neurons. As predictions are often computed from millions of weights that are optimized over millions of images, such explanations can easily miss a bigger picture. We present Summit, an interactive system that scalably and systematically summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions. Summit introduces two new scalable summarization techniques: (1) activation aggregation discovers important neurons, and (2) neuron-influence aggregation identifies relationships among such neurons. Summit combines these techniques to create the novel attribution graph that reveals and summarizes crucial neuron associations and substructures that contribute to a model's outcomes. Summit scales to large data, such as the ImageNet dataset with 1.2M images, and leverages neural network feature visualization and dataset examples to help users distill large, complex neural network models into compact, interactive visualizations. We present neural network exploration scenarios where Summit helps us discover multiple surprising insights into a prevalent, large-scale image classifier's learned representations and informs future neural network architecture design. The Summit visualization runs in modern web browsers and is open-sourced.
650 4 |a Journal Article 
650 4 |a Research Support, U.S. Gov't, Non-P.H.S. 
700 1 |a Park, Haekyu  |e verfasserin  |4 aut 
700 1 |a Robinson, Caleb  |e verfasserin  |4 aut 
700 1 |a Polo Chau, Duen Horng  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 26(2020), 1 vom: 22. Jan., Seite 1096-1106  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:26  |g year:2020  |g number:1  |g day:22  |g month:01  |g pages:1096-1106 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2019.2934659  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 26  |j 2020  |e 1  |b 22  |c 01  |h 1096-1106
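The abstract (MARC field 520) mentions "activation aggregation," which discovers neurons important to a class by aggregating activations over many images. As a rough, hypothetical illustration of that idea only — the function name, array shapes, and top-k selection here are assumptions for a toy example, not the paper's actual implementation — a minimal NumPy sketch:

```python
import numpy as np

def aggregate_activations(activations, labels, top_k=3):
    """Toy activation aggregation: for each class, average each
    channel's activation over that class's images and keep the
    top-k channels as the "important neurons" for that class.

    activations: (n_images, n_channels) array, e.g. spatially
                 pooled activations from one conv layer.
    labels:      (n_images,) integer class labels.
    Returns {class_id: [indices of top-k channels by mean activation]}.
    """
    important = {}
    for cls in np.unique(labels):
        mean_act = activations[labels == cls].mean(axis=0)
        important[int(cls)] = np.argsort(mean_act)[::-1][:top_k].tolist()
    return important

# Tiny synthetic example: channel 1 fires for class 0, channel 2 for class 1.
acts = np.array([[0.1, 0.9, 0.0],
                 [0.2, 0.8, 0.1],
                 [0.0, 0.1, 0.7],
                 [0.1, 0.0, 0.9]])
labels = np.array([0, 0, 1, 1])
print(aggregate_activations(acts, labels, top_k=1))  # {0: [1], 1: [2]}
```

In Summit, per-class neuron sets like these are then linked by the second technique, neuron-influence aggregation, to form the attribution graph described in the abstract.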