ConceptExplainer: Interactive Explanation for Deep Neural Networks from a Concept Perspective

Traditional deep learning interpretability methods which are suitable for model users cannot explain network behaviors at the global level and are inflexible at providing fine-grained explanations. As a solution, concept-based explanations are gaining attention due to their human intuitiveness and their flexibility to describe both global and local model behaviors.
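To make the concept-based explanation idea concrete, below is a minimal, hypothetical sketch of a TCAV-style concept activation vector (CAV) and concept importance score. It is illustrative only and is not the CONCEPTEXPLAINER implementation; all function names, array shapes, and data here are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    # Fit a linear probe separating concept vs. random activations in a chosen
    # layer's latent space; the normalized normal of its decision boundary is the CAV.
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return cav / np.linalg.norm(cav)

def concept_importance(class_logit_grads, cav):
    # Directional derivative of the class logit along the CAV for each input;
    # the fraction of positive values is a TCAV-style importance score for the concept.
    return float(np.mean(class_logit_grads @ cav > 0))

# Toy usage: random arrays stand in for real layer activations and gradients.
rng = np.random.default_rng(0)
cav = concept_activation_vector(rng.normal(1.0, 1.0, (50, 128)),
                                rng.normal(0.0, 1.0, (50, 128)))
print(concept_importance(rng.normal(0.2, 1.0, (200, 128)), cav))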

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - PP(2022), 26 Sept.
Main Author: Huang, Jinbin (Author)
Other Authors: Mishra, Aditi, Kwon, Bum Chul, Bryan, Chris
Format: Online article
Language: English
Published: 2022
Access to the parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM346717256
003 DE-627
005 20240217232100.0
007 cr uuu---uuuuu
008 231226s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2022.3209384  |2 doi 
028 5 2 |a pubmed24n1297.xml 
035 |a (DE-627)NLM346717256 
035 |a (NLM)36155466 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Huang, Jinbin  |e verfasserin  |4 aut 
245 1 0 |a ConceptExplainer  |b Interactive Explanation for Deep Neural Networks from a Concept Perspective 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 16.02.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a Traditional deep learning interpretability methods which are suitable for model users cannot explain network behaviors at the global level and are inflexible at providing fine-grained explanations. As a solution, concept-based explanations are gaining attention due to their human intuitiveness and their flexibility to describe both global and local model behaviors. Concepts are groups of similarly meaningful pixels that express a notion; they are embedded within the network's latent space and have commonly been hand-generated, but have recently been discovered by automated approaches. Unfortunately, the magnitude and diversity of discovered concepts make it difficult to navigate and make sense of the concept space. Visual analytics can serve a valuable role in bridging these gaps by enabling structured navigation and exploration of the concept space to provide users with concept-based insights into model behavior. To this end, we design, develop, and validate CONCEPTEXPLAINER, a visual analytics system that enables people to interactively probe and explore the concept space to explain model behavior at the instance/class/global level. The system was developed via iterative prototyping to address a number of design challenges that model users face in interpreting the behavior of deep learning models. Via a rigorous user study, we validate how CONCEPTEXPLAINER addresses these challenges. Likewise, we conduct a series of usage scenarios to demonstrate how the system supports the interactive analysis of model behavior across a variety of tasks and explanation granularities, such as identifying concepts that are important to classification, identifying bias in training data, and understanding how concepts can be shared across diverse and seemingly dissimilar classes. 
650 4 |a Journal Article 
700 1 |a Mishra, Aditi  |e verfasserin  |4 aut 
700 1 |a Kwon, Bum Chul  |e verfasserin  |4 aut 
700 1 |a Bryan, Chris  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g PP(2022) vom: 26. Sept.  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:PP  |g year:2022  |g day:26  |g month:09 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2022.3209384  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2022  |b 26  |c 09