
Approximate Posteriors in Neural Networks: A Sampling Perspective


Abstract

The landscape of neural network loss functions is known to be highly complex, and the ability of gradient-based approaches to find well-generalizing solutions to such high-dimensional problems is often considered a miracle. Similarly, Bayesian neural networks (BNNs) inherit this complexity through the model's likelihood. In applications where BNNs are used to account for weight uncertainty, recent advances in sampling-based inference (SAI) have shown promising results, outperforming other approximate Bayesian inference (ABI) methods. In this work, we analyze the approximate posterior implicitly defined by SAI and uncover key insights into its success. Among other things, we demonstrate how SAI handles symmetries differently than ABI, and examine the role of overparameterization. Further, we investigate the characteristics of approximate posteriors with sampling budgets scaled far beyond previously studied limits and explain why the localized behavior of samplers does not inherently constitute a disadvantage.



AABI 2025

7th Symposium on Advances in Approximate Bayesian Inference, co-located with the 13th International Conference on Learning Representations. Singapore, Apr 29, 2025. To be published. Preprint available.

Authors

J. Kobialka • E. Sommer • J. Kwon • D. Dold • D. Rügamer


Research Area

A1 | Statistical Foundations & Explainability

BibTeX Key: KSK+25
