Learning Bayesian Statistics

#147 Fast Approximate Inference without Convergence Worries, with Martin Ingram

Dec 12, 2025
Martin Ingram, a data scientist and Bayesian researcher known for his work on DADVI and contributions to PyMC, dives into fast approximate inference methods. He discusses how DADVI improves the speed and accuracy of Bayesian inference while maintaining model flexibility. The conversation covers recovering covariance estimates with linear response and contrasts deterministic optimization with stochastic methods. Martin also shares insights on DADVI's practical performance across different models and hints at future enhancements such as GPU support and normalizing flows.
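To make the deterministic-versus-stochastic contrast concrete, here is a toy sketch of the core DADVI idea on a made-up one-parameter model (this is illustrative only, not the dadvi package's API; the model, data, and draw count are assumptions). Fixing the standard-normal draws up front turns the sampled ELBO into an ordinary deterministic function of the variational parameters, so an off-the-shelf optimizer can be run to convergence instead of stochastic gradient descent.

```python
# Toy sketch of the DADVI idea (illustrative assumption, not the dadvi package's API).
# Fixing the standard-normal draws up front makes the sampled ELBO a deterministic
# function of the variational parameters, so L-BFGS can optimize it to convergence.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=50)  # made-up observations
Z = rng.standard_normal(30)                     # fixed draws, reused at every step

def log_joint(theta):
    # log p(data, theta): Normal likelihood with known scale, wide Normal prior on the mean
    return norm.logpdf(data, loc=theta, scale=1.0).sum() + norm.logpdf(theta, loc=0.0, scale=10.0)

def neg_fixed_sample_elbo(params):
    mu, log_sd = params
    sd = np.exp(log_sd)
    thetas = mu + sd * Z                                   # reparameterised draws from q
    expected_log_p = np.mean([log_joint(t) for t in thetas])
    entropy = 0.5 * np.log(2.0 * np.pi * np.e) + log_sd   # entropy of the Gaussian q
    return -(expected_log_p + entropy)                     # minimise the negative ELBO

result = minimize(neg_fixed_sample_elbo, x0=np.zeros(2), method="L-BFGS-B")
mu_hat, sd_hat = result.x[0], np.exp(result.x[1])
print(f"q(theta) = Normal({mu_hat:.3f}, {sd_hat:.3f})")
```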
AI Snips
ANECDOTE

From Tennis Models To Fast Inference

  • Martin Ingram explained his journey from physics and deep learning to Bayesian inference, motivated by a slow tennis model in Stan.
  • That experience pushed him to research fast approximate inference and eventually led to his work on DADVI.
INSIGHT

Why Mean-Field Gets Means but Misses Covariances

  • Mean-field variational inference often finds posterior means accurately but underestimates covariance when parameters are correlated.
  • The objective trades off entropy and expected log posterior, favoring high-density modes over full posterior spread.
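For reference, the objective the snip alludes to is the standard ELBO decomposition (written here in conventional notation, not quoted from the episode): the expected log joint pulls the approximation toward high-density regions, while the entropy term rewards spread. With a fully factorised mean-field q the entropy cannot represent correlations, which is why marginal variances of correlated parameters get understated even when the means are right.

```latex
\[
\mathrm{ELBO}(q)
  = \underbrace{\mathbb{E}_{q(\theta)}\!\bigl[\log p(y,\theta)\bigr]}_{\text{expected log joint}}
  + \underbrace{\mathbb{H}\!\bigl[q\bigr]}_{\text{entropy}},
\qquad
q(\theta) = \prod_{i} q_i(\theta_i).
\]
```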
ADVICE

Don't Trust Stochastic ADVI Blindly

  • Avoid blind trust in stochastic ADVI because tuning step sizes and detecting convergence can be fiddly and non-repeatable.
  • Use diagnostic checks and be prepared to monitor or tune optimizers when using stochastic VI.
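As a concrete illustration of that advice, the sketch below runs PyMC's stochastic ADVI with a parameter-convergence callback and keeps the ELBO history around for inspection. The toy model and data are assumptions for illustration, and the iteration count, tolerance, and learning rate are arbitrary placeholders that would need tuning on a real model.

```python
# A minimal sketch of monitored stochastic ADVI in PyMC (toy model and data are
# assumptions; iteration count, tolerance, and learning rate are arbitrary).
import numpy as np
import pymc as pm
from pymc.variational.callbacks import CheckParametersConvergence
from pymc.variational.updates import adam

rng = np.random.default_rng(1)
y_obs = rng.normal(loc=1.0, scale=2.0, size=100)  # stand-in data

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("y", mu=mu, sigma=sigma, observed=y_obs)

    approx = pm.fit(
        n=50_000,
        method="advi",
        callbacks=[CheckParametersConvergence(diff="absolute", tolerance=1e-3)],
        obj_optimizer=adam(learning_rate=1e-2),  # the step size still needs tuning
    )

# approx.hist is the loss (negative ELBO) trace: a flat, noisy tail suggests the
# optimiser has settled, while a still-decreasing trace means more iterations
# (or a different step size) are needed before the approximation should be trusted.
print(approx.hist[-5:])
idata = approx.sample(1_000)  # draws from the fitted mean-field approximation
```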