Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data

The widespread use of Deep Neural Networks (DNNs) has recently resulted in their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization abilities, understanding factors like prediction quality, confidence, robustness, and uncertainty is crucial. These insights aid application scientists in making informed decisions. However, DNNs lack inherent mechanisms to measure prediction uncertainty, prompting the creation of distinct frameworks for constructing robust uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout, aimed at enabling uncertainty-informed visual analysis of features within steady vector field data. Our detailed exploration using several vector data sets indicates that uncertainty-aware models generate informative visualization results of vector field features. Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, rendering it applicable for the analysis of non-trivial vector field data sets.
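As a rough illustration of one of the two techniques evaluated in the abstract, the sketch below shows how Monte Carlo Dropout can attach prediction uncertainty to an implicit neural representation of a steady 2D vector field: dropout stays active at inference, and the spread across repeated stochastic forward passes serves as the uncertainty estimate. This is a minimal, hypothetical sketch assuming PyTorch; the network size, dropout rate, sample count, and all names are illustrative and are not taken from the paper.

# Minimal sketch (not the authors' code): Monte Carlo Dropout over an
# implicit neural representation of a steady 2D vector field (PyTorch).
import torch
import torch.nn as nn


class DropoutVectorINR(nn.Module):
    """MLP mapping spatial coordinates (x, y) to vector components (u, v)."""

    def __init__(self, hidden=128, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 2),
        )

    def forward(self, coords):
        return self.net(coords)


def mc_dropout_predict(model, coords, n_samples=32):
    """Keep dropout stochastic at inference and aggregate repeated passes.

    Returns the mean predicted vector and the per-component standard
    deviation, which serves as the prediction-uncertainty estimate.
    """
    model.train()  # leave dropout layers active (no batch norm used here)
    with torch.no_grad():
        samples = torch.stack([model(coords) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)


if __name__ == "__main__":
    model = DropoutVectorINR()
    query = torch.rand(1024, 2)           # hypothetical query points in [0, 1]^2
    mean_vec, std_vec = mc_dropout_predict(model, query)
    print(mean_vec.shape, std_vec.shape)  # (1024, 2) each

A Deep Ensemble variant, the other technique named above, would instead train several independently initialized networks and compute the same mean and standard deviation across their predictions.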

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - PP(2024), 09 Sept.
First Author: Kumar, Atul (Author)
Other Authors: Garg, Siddharth, Dutta, Soumya
Format: Online article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM377374830
003 DE-627
005 20240910233826.0
007 cr uuu---uuuuu
008 240910s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2024.3456360  |2 doi 
028 5 2 |a pubmed24n1529.xml 
035 |a (DE-627)NLM377374830 
035 |a (NLM)39250384 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Kumar, Atul  |e verfasserin  |4 aut 
245 1 0 |a Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 10.09.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a The widespread use of Deep Neural Networks (DNNs) has recently resulted in their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization abilities, understanding factors like prediction quality, confidence, robustness, and uncertainty is crucial. These insights aid application scientists in making informed decisions. However, DNNs lack inherent mechanisms to measure prediction uncertainty, prompting the creation of distinct frameworks for constructing robust uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout, aimed at enabling uncertainty-informed visual analysis of features within steady vector field data. Our detailed exploration using several vector data sets indicates that uncertainty-aware models generate informative visualization results of vector field features. Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, rendering it applicable for the analysis of non-trivial vector field data sets.
650 4 |a Journal Article 
700 1 |a Garg, Siddharth  |e verfasserin  |4 aut 
700 1 |a Dutta, Soumya  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g PP(2024) vom: 09. Sept.  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:PP  |g year:2024  |g day:09  |g month:09 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2024.3456360  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2024  |b 09  |c 09