Learning Bayesian Statistics

BITESIZE | How to Make Your Models Faster, with Haavard Rue & Janet van Niekerk

Jul 16, 2025
Janet van Niekerk, a Bayesian statistician with a PhD focusing on Bayesian inference, joins Haavard Rue to unveil the game-changing Integrated Nested Laplace Approximations (INLA) method. They discuss how INLA vastly improves model speed and scalability for large datasets compared to traditional MCMC techniques. The duo dives into the intricacies of latent Gaussian models, their practical applications in fields like global health, and the rapid development of the R-INLA package, which makes Bayesian analysis far more efficient. Tune in for insights that could transform your statistical modeling!
INSIGHT

Deterministic Posterior Approximation

  • INLA approximates the full posterior deterministically instead of sampling like MCMC.
  • That deterministic approximation is why INLA is much faster for many models.
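To make the deterministic-approximation idea concrete, here is a minimal one-dimensional Laplace approximation in Python: find the posterior mode with Newton's method, then approximate the posterior by a Gaussian whose variance comes from the curvature at the mode. This is a toy sketch of the core building block, not the INLA algorithm itself; the Bernoulli model and the data below are hypothetical.

```python
import math

# Toy model (assumed for illustration, not from the episode):
#   y_i ~ Bernoulli(sigmoid(theta)),  prior theta ~ N(0, 1)
y = [1, 1, 0, 1, 1, 0, 1, 1]  # hypothetical observations

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def grad(theta):
    # First derivative of the log posterior: sum_i (y_i - p) - theta
    p = sigmoid(theta)
    return sum(yi - p for yi in y) - theta

def hess(theta):
    # Second derivative: -n * p * (1 - p) - 1  (always negative here)
    p = sigmoid(theta)
    return -len(y) * p * (1.0 - p) - 1.0

# Newton iterations to locate the posterior mode deterministically
theta = 0.0
for _ in range(50):
    theta -= grad(theta) / hess(theta)

mode = theta
var = -1.0 / hess(mode)  # Laplace approximation: posterior ~ N(mode, var)
print(mode, var)
```

No sampling happens anywhere: the approximate posterior mean and variance fall out of an optimization plus one curvature evaluation, which is why this style of inference scales so much better than MCMC on large latent Gaussian models.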
ANECDOTE

Switching From MCMC To Approximations

  • Haavard described switching from trying to make MCMC work to developing fast approximations.
  • He framed INLA's goal as making inference quick and scalable for large models and datasets.
INSIGHT

Latent Gaussian Model Is The Key

  • INLA is built for latent Gaussian models where fixed and random effects have a joint Gaussian prior.
  • Many common models (GAMs, time series, spatial, survival) fit this form, so INLA applies broadly.
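The latent Gaussian model class described above is conventionally written in three stages (standard notation from the INLA literature, sketched here rather than quoted from the episode): observations depend on a linear predictor, the latent field of fixed and random effects is jointly Gaussian, and hyperparameters get their own prior.

```latex
\begin{aligned}
y_i \mid x, \theta &\sim \pi(y_i \mid \eta_i, \theta), \qquad \eta_i = (A x)_i \\
x \mid \theta &\sim \mathcal{N}\!\left(0,\; Q(\theta)^{-1}\right) \\
\theta &\sim \pi(\theta)
\end{aligned}
```

Here $x$ collects all fixed and random effects, $Q(\theta)$ is the (typically sparse) prior precision matrix, and $A$ maps the latent field to the linear predictor; GAMs, time-series, spatial, and survival models all fit this template by choosing $Q(\theta)$ and the likelihood $\pi(y_i \mid \eta_i, \theta)$ appropriately.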