
Approximate Posteriors in Neural Networks: A Sampling Perspective

MCML Authors

Abstract

The landscape of neural network loss functions is known to be highly complex, and the ability of gradient-based approaches to find well-generalizing solutions to such high-dimensional problems is often considered a miracle. Similarly, Bayesian neural networks (BNNs) inherit this complexity through the model's likelihood. In applications where BNNs are used to account for weight uncertainty, recent advances in sampling-based inference (SAI) have shown promising results, outperforming other approximate Bayesian inference (ABI) methods. In this work, we analyze the approximate posterior implicitly defined by SAI and uncover key insights into its success. Among other things, we demonstrate how SAI handles symmetries differently than ABI and examine the role of overparameterization. Further, we investigate the characteristics of approximate posteriors with sampling budgets scaled far beyond previously studied limits and explain why the localized behavior of samplers does not inherently constitute a disadvantage.
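
As background on what a sampling-defined approximate posterior looks like in practice, below is a minimal, self-contained sketch, not the method evaluated in the paper: full-batch unadjusted Langevin dynamics (a simple relative of SGLD) applied to a toy one-hidden-layer regression network. The network, data, prior, and step size are all illustrative assumptions; the point is only that the collected weight samples implicitly define an approximate posterior from which predictions and uncertainties are computed.

```python
# Illustrative sketch only (not the paper's method): unadjusted Langevin
# dynamics on a tiny one-hidden-layer network. The stored weight samples
# implicitly define an approximate posterior in the spirit of SAI.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration only).
X = rng.uniform(-3, 3, size=(128, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

H = 16  # hidden width

def unpack(w):
    # Split the flat parameter vector into layer weights/biases.
    W1 = w[:H].reshape(1, H); b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H].reshape(H, 1); b2 = w[3 * H:]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def grad_log_post(w, X, y, noise_var=0.01, prior_var=1.0):
    # Finite-difference gradient of log posterior = log likelihood + log prior.
    def log_post(w):
        resid = y - forward(w, X)
        return -0.5 * np.sum(resid**2) / noise_var - 0.5 * np.sum(w**2) / prior_var
    eps, base = 1e-5, log_post(w)
    g = np.zeros_like(w)
    for i in range(w.size):
        w_eps = w.copy(); w_eps[i] += eps
        g[i] = (log_post(w_eps) - base) / eps
    return g

w = 0.1 * rng.standard_normal(3 * H + 1)
step, samples = 1e-4, []
for t in range(2000):
    # Langevin update: gradient step plus injected Gaussian noise.
    w = w + 0.5 * step * grad_log_post(w, X, y) \
          + np.sqrt(step) * rng.standard_normal(w.shape)
    if t >= 1000 and t % 10 == 0:  # keep post-burn-in samples
        samples.append(w.copy())

# The sample set approximates the posterior; predictive mean and uncertainty
# come from averaging forward passes over the stored weight samples.
preds = np.stack([forward(s, X) for s in samples])
print(preds.mean(axis=0).shape, preds.std(axis=0).mean())
```
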

Type: inproceedings


FPI @ICLR 2025

Workshop on Frontiers in Probabilistic Inference: Learning meets Sampling at the 13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025. Spotlight Presentation.

Authors

J. Kobialka • E. Sommer • J. Kwon • D. Dold • D. Rügamer


Research Area

 A1 | Statistical Foundations & Explainability

BibTeXKey: KSK+25a
