Towards Efficient MCMC Sampling in Bayesian Neural Networks by Exploiting Symmetry (Extended Abstract)

Jonas Gregor Wiese, Lisa Wimmer, Theodore Papamarkou, Bernd Bischl, Stephan Günnemann, David Rügamer

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Sister Conferences Best Papers. Pages 8466-8470. https://doi.org/10.24963/ijcai.2024/943

Bayesian inference in deep neural networks is challenging due to the high-dimensional, strongly multi-modal parameter posterior density landscape. Markov chain Monte Carlo approaches asymptotically recover the true posterior but are considered prohibitively expensive for large modern architectures. We argue that the dilemma between exact-but-unaffordable and cheap-but-inexact approaches can be mitigated by exploiting symmetries in the posterior landscape. We show theoretically that the posterior predictive density in Bayesian neural networks can be restricted to a symmetry-free parameter reference set. By further deriving an upper bound on the number of Monte Carlo chains required to capture the functional diversity, we propose a straightforward approach for feasible Bayesian inference.
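To make the symmetry argument concrete: in a tanh network, permuting the hidden neurons of a layer, or negating a neuron's incoming and outgoing weights, produces a different parameter vector that computes exactly the same function. The following minimal numpy sketch (illustrative only, not code from the paper) demonstrates both equivalences for a one-hidden-layer network:

    import numpy as np

    rng = np.random.default_rng(0)

    # One-hidden-layer tanh network: f(x) = W2 @ tanh(W1 @ x + b1) + b2
    n_in, n_hidden, n_out = 3, 4, 1
    W1 = rng.normal(size=(n_hidden, n_in))
    b1 = rng.normal(size=n_hidden)
    W2 = rng.normal(size=(n_out, n_hidden))
    b2 = rng.normal(size=n_out)

    def forward(W1, b1, W2, b2, x):
        return W2 @ np.tanh(W1 @ x + b1) + b2

    # Permutation symmetry: reorder hidden neurons consistently.
    perm = rng.permutation(n_hidden)
    W1_p, b1_p, W2_p = W1[perm], b1[perm], W2[:, perm]

    # Sign-flip symmetry: tanh(-z) = -tanh(z), so negating a neuron's
    # incoming and outgoing weights leaves the function unchanged.
    signs = np.array([1.0, -1.0, 1.0, -1.0])
    W1_s, b1_s, W2_s = signs[:, None] * W1_p, signs * b1_p, W2_p * signs

    x = rng.normal(size=n_in)
    print(forward(W1, b1, W2, b2, x))        # original parameters
    print(forward(W1_s, b1_s, W2_s, b2, x))  # symmetric copy: same output

Because such transformations compose across layers, a tanh network with hidden layers of widths n_1, ..., n_L admits at least ∏_l n_l! · 2^{n_l} distinct parameter vectors per function; this factorial redundancy in the posterior is what restricting inference to a symmetry-free reference set removes.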
Keywords:
Machine Learning: ML: Bayesian learning
Machine Learning: ML: Probabilistic machine learning
Uncertainty in AI: UAI: Inference
Uncertainty in AI: UAI: Tractable probabilistic models