Better Understanding Differences in Attribution Methods via Systematic Evaluations

Deep neural networks are very successful on many vision tasks, but hard to interpret due to their black box nature. To overcome this, various post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions. Evaluating such methods is challenging since no ground truth attributions exist. We thus propose three novel evaluation schemes to more reliably measure the faithfulness of those methods, to make comparisons between them more fair, and to make visual inspection more systematic. To address faithfulness, we propose a novel evaluation setting (DiFull) in which we carefully control which parts of the input can influence the output in order to distinguish possible from impossible attributions. To address fairness, we note that different methods are applied at different layers, which skews any comparison, and so evaluate all methods on the same layers (ML-Att) and discuss how this impacts their performance on quantitative metrics. For more systematic visualizations, we propose a scheme (AggAtt) to qualitatively evaluate the methods on complete datasets. We use these evaluation schemes to study strengths and shortcomings of some widely used attribution methods over a wide range of models. Finally, we propose a post-processing smoothing step that significantly improves the performance of some attribution methods, and discuss its applicability.
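
As a concrete illustration of the abstract's final point, below is a minimal sketch of what a post-processing smoothing step over an attribution map could look like. The Gaussian kernel, the smooth_attribution function name, and the sigma parameter are assumptions made here purely for illustration; the abstract does not specify the paper's actual smoothing operator.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def smooth_attribution(attribution: np.ndarray, sigma: float = 2.0) -> np.ndarray:
        # Smooth a 2-D attribution map (H x W). A Gaussian blur is assumed
        # for this sketch; it is not taken from the paper.
        return gaussian_filter(attribution, sigma=sigma)

    # Usage on a stand-in attribution map:
    attr = np.random.rand(224, 224)         # placeholder for a method's raw output
    attr_smooth = smooth_attribution(attr)  # smoothed map, same shape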


Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46(2024), 6, 12 June, pages 4090-4101
Main author: Rao, Sukrut (Author)
Other authors: Bohle, Moritz; Schiele, Bernt
Format: Online article
Language: English
Published: 2024
Collection: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652c 4500
001 NLM367058715
003 DE-627
005 20250305162444.0
007 cr uuu---uuuuu
008 240114s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3353528  |2 doi 
028 5 2 |a pubmed25n1223.xml 
035 |a (DE-627)NLM367058715 
035 |a (NLM)38215324 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Rao, Sukrut  |e verfasserin  |4 aut 
245 1 0 |a Better Understanding Differences in Attribution Methods via Systematic Evaluations 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 07.05.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Deep neural networks are very successful on many vision tasks, but hard to interpret due to their black box nature. To overcome this, various post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions. Evaluating such methods is challenging since no ground truth attributions exist. We thus propose three novel evaluation schemes to more reliably measure the faithfulness of those methods, to make comparisons between them more fair, and to make visual inspection more systematic. To address faithfulness, we propose a novel evaluation setting (DiFull) in which we carefully control which parts of the input can influence the output in order to distinguish possible from impossible attributions. To address fairness, we note that different methods are applied at different layers, which skews any comparison, and so evaluate all methods on the same layers (ML-Att) and discuss how this impacts their performance on quantitative metrics. For more systematic visualizations, we propose a scheme (AggAtt) to qualitatively evaluate the methods on complete datasets. We use these evaluation schemes to study strengths and shortcomings of some widely used attribution methods over a wide range of models. Finally, we propose a post-processing smoothing step that significantly improves the performance of some attribution methods, and discuss its applicability.
650 4 |a Journal Article 
700 1 |a Bohle, Moritz  |e verfasserin  |4 aut 
700 1 |a Schiele, Bernt  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 6 vom: 12. Juni, Seite 4090-4101  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnas 
773 1 8 |g volume:46  |g year:2024  |g number:6  |g day:12  |g month:06  |g pages:4090-4101 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3353528  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 6  |b 12  |c 06  |h 4090-4101