

BITESIZE | What's Missing in Bayesian Deep Learning?
Aug 13, 2025
Yingzhen Li, a researcher specializing in Bayesian computation and uncertainty in neural networks, teams up with François-Xavier Briol, who focuses on machine learning tools for Bayesian statistics. They dive into the complexities of Bayesian deep learning, emphasizing uncertainty quantification and its role in effective modeling. The discussion covers the evolution of Bayesian models, simulation-based inference methods, and the pressing need for better computational tools for high-dimensional problems. Their insights on integrating machine learning with Bayesian approaches point to exciting possibilities for the field.
AI Snips
Uncertainty Still Matters With Huge Models
- Large pretrained models raise fresh questions about whether, and where, neural networks still need uncertainty quantification.
- Researchers now explore Bayesian techniques during fine-tuning or prompting to capture the ambiguity that remains (see the sketch below).
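The episode doesn't name a specific method, so as an illustration only, here is a minimal sketch of one common approach: keep the pretrained backbone frozen and put a Laplace approximation over a last-layer logistic-regression head, so predictions reflect uncertainty about the fine-tuned weights. The toy features and all hyperparameters below are placeholders.

```python
import numpy as np

# Toy stand-in for frozen "pretrained" features on a binary task.
# Assumption: method and data are illustrative placeholders, not
# anything prescribed in the episode.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # frozen backbone features
w_true = rng.normal(size=5)
y = (X @ w_true + 0.5 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# MAP fit of a logistic-regression head with a Gaussian prior
# (precision alpha), via Newton's method.
alpha = 1.0
w = np.zeros(5)
for _ in range(100):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + alpha * w
    H = X.T @ (X * (p * (1 - p))[:, None]) + alpha * np.eye(5)
    w -= np.linalg.solve(H, grad)

# Laplace approximation: posterior over head weights ~ N(w, H^-1).
# Marginalise predictions with MacKay's probit approximation.
cov = np.linalg.inv(H)
x_new = rng.normal(size=5)                     # features of a new input
mu = x_new @ w
var = x_new @ cov @ x_new
kappa = 1.0 / np.sqrt(1.0 + np.pi * var / 8.0)
print("MAP probability:             ", sigmoid(mu))
print("Laplace-averaged probability:", sigmoid(kappa * mu))
```

The Laplace-averaged probability is pulled toward 0.5 when the posterior over the head's weights is wide, which is exactly the "remaining ambiguity" the snip refers to.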
Challenges In Priors And Expectations
- Complex Bayesian models must handle structured, high-dimensional covariates while still delivering principled uncertainty.
- Neural networks supply useful inductive bias, but they make priors hard to specify and posterior expectations hard to compute (see the sketch after this snip).
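To make the prior-specification point concrete, here is a small prior-predictive sketch (my own illustration, not a method from the episode): drawing weights from unit-style Gaussians at different scales shows how opaque the induced prior over functions is, even for a tiny MLP.

```python
import numpy as np

# Prior-predictive check for a one-hidden-layer MLP. Assumption:
# purely illustrative; the episode names no specific model.
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 50)[:, None]            # 1-D inputs

def prior_function_draw(width=50, scale=1.0):
    """Sample weights from N(0, scale^2) and return the induced function."""
    W1 = scale * rng.normal(size=(1, width))
    b1 = scale * rng.normal(size=width)
    W2 = scale * rng.normal(size=(width, 1))
    return (np.tanh(x @ W1 + b1) @ W2).ravel()

# The same "innocuous" weight prior induces very different function
# priors as its scale changes.
for scale in (0.3, 1.0, 3.0):
    draws = np.stack([prior_function_draw(scale=scale) for _ in range(100)])
    print(f"weight-prior scale {scale}: std of function draws ~ {draws.std():.2f}")
```

Small changes to the weight-prior scale swing the spread of prior function draws by an order of magnitude, which is why weight-space priors are so hard to anchor to domain knowledge.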
Anchor Priors To Expert Knowledge
- Specify priors from expert knowledge whenever possible to make Bayesian deep models principled (one simple elicitation recipe is sketched below).
- Also aim for scalable inference and theoretical guarantees when designing Bayesian deep-learning systems.
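As a hedged illustration of prior elicitation (the guests don't prescribe a recipe), the sketch below turns a hypothetical expert judgement into a concrete prior: if an expert believes a positive rate lies between 1 and 5 with roughly 95% probability, a log-normal prior can be matched to those quantiles.

```python
import numpy as np
from scipy import stats

# Hypothetical elicitation: an expert believes a positive rate lies
# in [1, 5] with ~95% probability. Match a log-normal prior to those
# quantiles (a generic recipe, not one given in the episode).
lo, hi = 1.0, 5.0
mu = (np.log(lo) + np.log(hi)) / 2             # centre on the log scale
sigma = (np.log(hi) - np.log(lo)) / (2 * 1.96) # half-width over z_0.975
prior = stats.lognorm(s=sigma, scale=np.exp(mu))

# Sanity check: the prior should reproduce the expert's interval.
print("2.5% / 97.5% quantiles:", prior.ppf([0.025, 0.975]))
```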