Encoding the Latent Posterior of Bayesian Neural Networks for Uncertainty Quantification



Bibliographic details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46(2024), 4, 28 March, pages 2027-2040
First author: Franchi, Gianni (author)
Other authors: Bursuc, Andrei, Aldea, Emanuel, Dubuisson, Severine, Bloch, Isabelle
Format: Online article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM363984399
003 DE-627
005 20240307232036.0
007 cr uuu---uuuuu
008 231226s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2023.3328829  |2 doi 
028 5 2 |a pubmed24n1319.xml 
035 |a (DE-627)NLM363984399 
035 |a (NLM)37906481 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Franchi, Gianni  |e verfasserin  |4 aut 
245 1 0 |a Encoding the Latent Posterior of Bayesian Neural Networks for Uncertainty Quantification 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 06.03.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Bayesian Neural Networks (BNNs) have long been considered an ideal, yet unscalable solution for improving the robustness and the predictive uncertainty of deep neural networks. While they could capture more accurately the posterior distribution of the network parameters, most BNN approaches are either limited to small networks or rely on constraining assumptions, e.g., parameter independence. These drawbacks have enabled the prominence of simple, but computationally heavy approaches such as Deep Ensembles, whose training and testing costs increase linearly with the number of networks. In this work we aim for efficient deep BNNs amenable to complex computer vision architectures, e.g., ResNet-50 DeepLabv3+, and tasks, e.g., semantic segmentation and image classification, with fewer assumptions on the parameters. We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer. Our approach, called Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient (in terms of computation and memory during both training and testing) ensembles. LP-BNNs attain competitive results across multiple metrics in several challenging benchmarks for image classification, semantic segmentation, and out-of-distribution detection. 
650 4 |a Journal Article 
700 1 |a Bursuc, Andrei  |e verfasserin  |4 aut 
700 1 |a Aldea, Emanuel  |e verfasserin  |4 aut 
700 1 |a Dubuisson, Severine  |e verfasserin  |4 aut 
700 1 |a Bloch, Isabelle  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 4 vom: 28. März, Seite 2027-2040  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:46  |g year:2024  |g number:4  |g day:28  |g month:03  |g pages:2027-2040 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2023.3328829  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 4  |b 28  |c 03  |h 2027-2040