Learning Bayesian Statistics

BITESIZE | Making Variational Inference Reliable: From ADVI to DADVI

Dec 17, 2025
Martin Ingram, a researcher known for his work on reliable variational inference, shares insights on ADVI (Automatic Differentiation Variational Inference) and its deterministic variant, DADVI. He discusses the appeal and pitfalls of ADVI, emphasizing its tuning challenges and convergence issues. The conversation digs into the advantages and drawbacks of mean-field variational inference and introduces the linear response technique for covariance estimation. Martin also contrasts stochastic and deterministic optimization, explaining how DADVI's fixed-draw approach can improve reliability while acknowledging the trade-offs involved.
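To make the fixed-draw idea concrete, here is a minimal sketch on a toy one-dimensional model: draw a batch of standard-normal samples once, plug them into the reparameterized ELBO, and hand the resulting deterministic objective to an off-the-shelf optimizer. The toy model, draw count (M = 30), and choice of L-BFGS are illustrative assumptions, not the DADVI authors' implementation.

```python
# DADVI-style fixed-draw sketch (toy model; illustrative, not the paper's code).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=50)   # observed data
z = rng.normal(size=30)                       # M = 30 base draws, fixed once

def log_joint(theta):
    # log p(y | theta) + log p(theta): normal likelihood, N(0, 10) prior
    return norm.logpdf(y, loc=theta, scale=1.0).sum() + norm.logpdf(theta, 0.0, 10.0)

def neg_elbo(params):
    mu, log_sigma = params
    theta = mu + np.exp(log_sigma) * z        # reparameterized draws
    # Because z is fixed, this Monte Carlo ELBO is a deterministic function
    # of (mu, log_sigma); + log_sigma is the Gaussian entropy up to a constant.
    elbo = np.mean([log_joint(t) for t in theta]) + log_sigma
    return -elbo

res = minimize(neg_elbo, x0=np.zeros(2), method="L-BFGS-B")
print("q mean:", res.x[0], "q sd:", np.exp(res.x[1]))
```

With the randomness frozen, standard optimizers with reliable stopping rules apply, which is the reliability gain discussed in the episode; the cost is that the optimum depends on the particular fixed draws.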
ANECDOTE

Speeding Up A Tennis Prediction Model

  • Martin Ingram described building a fast tennis prediction model early in his PhD and wanting to avoid slow MCMC workflows.
  • He explored approximate inference to speed up fitting Stan/PyMC models without rewriting them as bespoke algorithms.
INSIGHT

ADVI Gives Good Means, Not Always Variance

  • ADVI promises black-box variational inference that delivers good posterior means quickly but often underestimates posterior variance (see the sketch after this list).
  • Posterior means are often sufficient for practical tasks; the full posterior covariance that MCMC provides is powerful but not always necessary.
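A hedged illustration of that variance point, as a minimal PyMC sketch: the target is a two-dimensional Gaussian with correlation 0.9, so the true marginal standard deviations are known to be 1.0. The model and settings are illustrative assumptions, not from the episode.

```python
# Mean-field ADVI on a correlated Gaussian: marginal sd is underestimated.
import numpy as np
import pymc as pm

cov = np.array([[1.0, 0.9], [0.9, 1.0]])      # strongly correlated target
with pm.Model():
    pm.MvNormal("x", mu=np.zeros(2), cov=cov)
    approx = pm.fit(n=30_000, method="advi", random_seed=1)

draws = approx.sample(5_000, random_seed=1)
print("ADVI marginal sd:", draws.posterior["x"].std(("chain", "draw")).values)
print("true marginal sd:", np.sqrt(np.diag(cov)))  # [1.0, 1.0]
```

Because the mean-field family ignores the 0.9 correlation, the fitted marginal sd comes out near sqrt(1 - 0.9**2) ≈ 0.44 rather than 1.0; the linear response technique mentioned above aims to correct exactly this kind of underestimate.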
ADVICE

Tune ADVI's Stochastic Optimization

  • Tune step sizes and monitor convergence when using stochastic ADVI, because stochastic optimizers like Adam need careful settings (a sketch follows this list).
  • Avoid treating ADVI as fully automatic; expect run-to-run variability and check convergence diagnostics manually.
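A minimal sketch of that manual tuning and checking in PyMC. The learning rate, iteration budget, and convergence tolerance below are illustrative assumptions that will need adjusting per model:

```python
# Stochastic ADVI with an explicit step size and a convergence callback.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=100)

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("obs", mu, sigma, observed=y)

    approx = pm.fit(
        n=50_000,
        method="advi",
        obj_optimizer=pm.adam(learning_rate=0.01),  # step size needs tuning
        callbacks=[pm.callbacks.CheckParametersConvergence(
            every=100, diff="absolute", tolerance=1e-3)],
        random_seed=1,
    )

# approx.hist tracks the negative ELBO; inspect it (e.g. plot it) rather
# than trusting a single run blindly.
print("final negative ELBO:", approx.hist[-1])
```

Re-running with different random seeds and comparing the fitted means is a cheap way to gauge the run-to-run variability mentioned above.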