

Learning Bayesian Statistics
Alexandre Andorra
Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date, or simply understand what Bayesian inference is?
Then this podcast is for you! You'll hear from researchers and practitioners of all fields about how they use Bayesian statistics, and how in turn YOU can apply these methods in your modeling workflow.
When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible.
So I created "Learning Bayesian Statistics", where you'll get to hear how Bayesian statistics are used to detect dark matter in outer space, forecast elections or understand how diseases spread and can ultimately be stopped.
But this show is not only about successes -- it's also about failures, because that's how we learn best. So you'll often hear the guests talking about what *didn't* work in their projects, why, and how they overcame these challenges. Because, in the end, we're all lifelong learners!
My name is Alex Andorra by the way, and I live in Estonia. By day, I'm a data scientist and modeler at the https://www.pymc-labs.io/ (PyMC Labs) consultancy. By night, I don't (yet) fight crime, but I'm an open-source enthusiast and core contributor to the Python packages https://docs.pymc.io/ (PyMC) and https://arviz-devs.github.io/arviz/ (ArviZ). I also love https://www.pollsposition.com/ (election forecasting) and, most importantly, Nutella. But I don't like talking about it – I prefer eating it.
So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you -- just subscribe! You can also support the show and https://www.patreon.com/learnbayesstats (unlock exclusive Bayesian swag on Patreon)!
Episodes

Aug 13, 2025 • 21min
BITESIZE | What's Missing in Bayesian Deep Learning?
Yingzhen Li, a researcher specializing in Bayesian communication and uncertainty in neural networks, teams up with François-Xavier Briol, who focuses on machine learning tools for Bayesian statistics. They dive into the complexities of Bayesian deep learning, emphasizing uncertainty quantification and its role in effective modeling. The discussion covers the evolution of Bayesian models, simulation-based inference methods, and the urgent need for better computational tools to tackle high-dimensional challenges. Their insights on integrating machine learning with Bayesian approaches spark exciting possibilities in the field.

Aug 6, 2025 • 1h 23min
#138 Quantifying Uncertainty in Bayesian Deep Learning, Live from Imperial College London
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Bayesian deep learning is a growing field with many challenges.
Current research focuses on applying Bayesian methods to neural networks.
Diffusion methods are emerging as a new approach for uncertainty quantification.
The integration of machine learning tools into Bayesian models is a key area of research.
The complexity of Bayesian neural networks poses significant computational challenges.
Future research will focus on improving methods for uncertainty quantification.
Generalized Bayesian inference offers a more robust approach to uncertainty.
Uncertainty quantification is crucial in fields like medicine and epidemiology.
Detecting out-of-distribution examples is essential for model reliability.
The exploration-exploitation trade-off is vital in reinforcement learning.
Marginal likelihood can be misleading for model selection.
The integration of Bayesian methods in LLMs presents unique challenges.

Chapters:
00:00 Introduction to Bayesian Deep Learning
03:12 Panelist Introductions and Backgrounds
10:37 Current Research and Challenges in Bayesian Deep Learning
18:04 Contrasting Approaches: Bayesian vs. Machine Learning
26:09 Tools and Techniques for Bayesian Deep Learning
31:18 Innovative Methods in Uncertainty Quantification
36:23 Generalized Bayesian Inference and Its Implications
41:38 Robust Bayesian Inference and Gaussian Processes
44:24 Software Development in Bayesian Statistics
46:51 Understanding Uncertainty in Language Models
50:03 Hallucinations in Language Models
53:48 Bayesian Neural Networks vs. Traditional Neural Networks
58:00 Challenges with Likelihood Assumptions
01:01:22 Practical Applications of Uncertainty Quantification
01:04:33 Meta Decision-Making with Uncertainty
01:06:50 Exploring Bayesian Priors in Neural Networks
01:09:17 Model Complexity and Data Signal
01:12:10 Marginal Likelihood and Model Selection
01:15:03 Implementing Bayesian Methods in LLMs
01:19:21 Out-of-Distribution Detection in LLMs

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Adam Tilmar Jakobsen.

Dr. Mélodie Monod (Imperial College London, School of Public Health)
Mélodie completed her PhD as part of the EPSRC Modern Statistics and Statistical Machine Learning program at Imperial College London, transitioned to Novartis as Principal Biostatistician, and is currently a Postdoctoral Researcher in Machine Learning at Imperial. Her research includes diffusion models, Bayesian deep learning, non-parametric Bayesian statistics and pandemic modelling. For more details, see her Google Scholar Publications page.

Dr. François-Xavier Briol (University College London, Department of Statistical Science)
F-X is Associate Professor in the Department of Statistical Science at University College London, where he leads the Fundamentals of Statistical Machine Learning research group and is co-director of the UCL ELLIS unit. His research focuses on developing statistical and machine learning methods for the sciences and engineering, with his recent work focusing on Bayesian computation and robustness to model misspecification. For more details, see his Google Scholar page.

Dr. Yingzhen Li (Imperial College London, Department of Computing)
Yingzhen is Associate Professor in Machine Learning at the Department of Computing at Imperial College London, following several years as a senior researcher at Microsoft Research Cambridge. Her research focuses on building reliable machine learning systems which can generalise to unseen environments, including topics such as (deep) probabilistic graphical model design, fast and accurate (Bayesian) inference/computation techniques, uncertainty quantification for computation and downstream tasks, and robust and adaptive machine learning systems. For more details, see her Google Scholar Publications page.

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Jul 30, 2025 • 25min
BITESIZE | Practical Applications of Causal AI with LLMs, with Robert Ness
Robert Ness, a Microsoft expert in causal assumptions, shares insights on the intersection of causal inference and deep learning. He emphasizes the importance of understanding causal concepts in statistical modeling. The conversation dives into the evolution of probabilistic machine learning and the impact of inductive biases on AI models. Notably, Ness elaborates on how large language models can formalize causal relationships, translating natural language into structured frameworks, making causal analysis more accessible and practical.

Jul 23, 2025 • 1h 38min
#137 Causal AI & Generative Models, with Robert Ness
Robert Ness, a research scientist at Microsoft and faculty at Northeastern University, dives deep into Causal AI. He discusses the critical role of causal assumptions in statistical modeling and how they enhance decision-making processes. The integration of deep learning with causal models is explored, revealing new frontiers in AI. Furthermore, Ness emphasizes the necessity of statistical rigor when evaluating large language models and highlights practical applications and future directions for causal generative modeling in various fields.

Jul 16, 2025 • 18min
BITESIZE | How to Make Your Models Faster, with Haavard Rue & Janet van Niekerk
Today’s clip is from episode 136 of the podcast, with Haavard Rue & Janet van Niekerk. Alex, Haavard and Janet explore the world of Bayesian inference with INLA, a fast and deterministic method that revolutionizes how we handle large datasets and complex models. Discover the power of INLA, and why it can make your models go much faster! Get the full conversation here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Jul 9, 2025 • 1h 18min
#136 Bayesian Inference at Scale: Unveiling INLA, with Haavard Rue & Janet van Niekerk
Haavard Rue, a professor and the mastermind behind Integrated Nested Laplace Approximations (INLA), joins Janet van Niekerk, a research scientist specializing in its application to medical statistics. They dive into the advantages of INLA over traditional MCMC methods, highlighting its efficiency with large datasets. The conversation touches on computational challenges, the significance of carefully chosen priors, and the potential of integrating GPUs for future advancements. They also share insights on using INLA for complex models, particularly in healthcare and spatial analysis.
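INLA itself is far more sophisticated than can be shown here, but the Laplace approximation at its core is easy to illustrate. The following is a hypothetical toy sketch, not code from the episode and not the INLA algorithm: it fits a Gaussian approximation to the posterior of a single logit-scale parameter in a Bernoulli model, using Newton steps to find the posterior mode and the curvature there to set the variance. The function name and test data are invented for illustration.

```python
import numpy as np

def laplace_approx(y, prior_sd=1.0, iters=25):
    """Gaussian (Laplace) approximation to the posterior of a logit-scale
    parameter x, with prior x ~ N(0, prior_sd^2) and y_i ~ Bernoulli(sigmoid(x))."""
    x = 0.0
    n, s = len(y), y.sum()
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-x))
        grad = s - n * p - x / prior_sd**2           # d/dx log posterior
        hess = -n * p * (1.0 - p) - 1.0 / prior_sd**2  # d^2/dx^2 log posterior
        x -= grad / hess                             # Newton step toward the mode
    p = 1.0 / (1.0 + np.exp(-x))
    # Approximate posterior: N(mode, -1/hessian at the mode)
    sd = np.sqrt(1.0 / (n * p * (1.0 - p) + 1.0 / prior_sd**2))
    return x, sd

y = np.array([1, 1, 0, 1, 1, 0, 1, 1])  # toy data: 6 successes out of 8
mode, sd = laplace_approx(y)
```

The appeal discussed in the episode follows from this structure: the whole computation is a handful of deterministic Newton steps rather than thousands of MCMC draws, which is why INLA-style methods scale so well to large latent Gaussian models.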

Jul 4, 2025 • 21min
BITESIZE | Understanding Simulation-Based Calibration, with Teemu Säilynoja
Teemu Säilynoja, an expert in simulation-based calibration and probabilistic programming, shares insights into the vital role of simulation-based calibration (SBC) in model validation. He discusses the challenges of developing SBC methods, focusing on the importance of prior and posterior analyses. The conversation dives into practical applications using tools like Stan and PyMC, and the significance of smart initialization in MCMC fitting. Teemu's expertise shines as he highlights strategies, including the Pathfinder approach, for navigating complex Bayesian models.

Jun 25, 2025 • 1h 12min
#135 Bayesian Calibration and Model Checking, with Teemu Säilynoja
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Teemu focuses on calibration assessments and predictive checking in Bayesian workflows.
Simulation-based calibration (SBC) checks model implementation.
SBC involves drawing realizations from the prior and generating prior predictive data.
Visual predictive checking is crucial for assessing model predictions.
Prior predictive checks should be done before looking at data.
Posterior SBC focuses on the area of parameter space most relevant to the data.
Challenges in SBC include inference time.
Visualizations complement numerical metrics in Bayesian modeling.
Amortized Bayesian inference benefits from SBC for quick posterior checks.
The calibration of Bayesian models is more intuitive than that of frequentist models.
Choosing the right visualization depends on data characteristics.
Using multiple visualization methods can reveal different insights.
Visualizations should be viewed as models of the data.
Goodness-of-fit tests can enhance visualization accuracy.
Uncertainty visualization is crucial but often overlooked.

Chapters:
09:53 Understanding Simulation-Based Calibration (SBC)
15:03 Practical Applications of SBC in Bayesian Modeling
22:19 Challenges in Developing Posterior SBC
29:41 The Role of SBC in Amortized Bayesian Inference
33:47 The Importance of Visual Predictive Checking
36:50 Predictive Checking and Model Fitting
38:08 The Importance of Visual Checks
40:54 Choosing Visualization Types
49:06 Visualizations as Models
55:02 Uncertainty Visualization in Bayesian Modeling
01:00:05 Future Trends in Probabilistic Modeling

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.

Links from the show:
Teemu's website: https://teemusailynoja.github.io/
Teemu on LinkedIn: https://www.linkedin.com/in/teemu-sailynoja/
Teemu on GitHub: https://github.com/TeemuSailynoja
Bayesian Workflow group: https://users.aalto.fi/~ave/group.html
LBS #107 Amortized Bayesian Inference with Deep Neural Networks, with Marvin Schmitt: https://learnbayesstats.com/episode/107-amortized-bayesian-inference-deep-neural-networks-marvin-schmitt
LBS #73 A Guide to Plotting Inferences & Uncertainties of Bayesian Models, with Jessica Hullman: https://learnbayesstats.com/episode/73-guide-plotting-inferences-uncertainties-bayesian-models-jessica-hullman
LBS #66 Uncertainty Visualization & Usable Stats, with Matthew Kay: https://learnbayesstats.com/episode/66-uncertainty-visualization-usable-stats-matthew-kay
LBS #35 The Past, Present & Future of BRMS, with Paul Bürkner: https://learnbayesstats.com/episode/35-past-present-future-brms-paul-burkner
LBS #29 Model Assessment, Non-Parametric Models, And Much More, with Aki Vehtari: https://learnbayesstats.com/episode/model-assessment-non-parametric-models-aki-vehtari
Posterior SBC – Simulation-Based Calibration Checking Conditional on Data: https://arxiv.org/abs/2502.03279
Recommendations for visual predictive checks in Bayesian workflow: https://teemusailynoja.github.io/visual-predictive-checks/
Simuk, SBC for PyMC: https://simuk.readthedocs.io/en/latest/
SBC, tools for model validation in R: https://hyunjimoon.github.io/SBC/index.html
New ArviZ, Prior and Posterior predictive checks: https://arviz-devs.github.io/EABM/Chapters/Prior_posterior_predictive_checks.html
Bayesplot, plotting for Bayesian models in R: https://mc-stan.org/bayesplot/
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
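The SBC recipe the episode describes (draw a parameter from the prior, simulate data from it, fit the model, then check where the true parameter ranks among posterior draws) can be sketched in a few lines. This is a hypothetical illustration, not code from the episode or from any of the SBC packages: it uses a conjugate Normal-Normal model so the posterior is exact, which means the rank statistics should come out approximately uniform.

```python
import numpy as np

rng = np.random.default_rng(0)
prior_mu, prior_sd, obs_sd, n_obs = 0.0, 1.0, 1.0, 10
n_sims, n_post = 500, 99

ranks = []
for _ in range(n_sims):
    theta = rng.normal(prior_mu, prior_sd)          # 1. draw from the prior
    y = rng.normal(theta, obs_sd, size=n_obs)       # 2. prior predictive data
    # 3. "fit": exact conjugate posterior for the mean of a Normal
    post_var = 1.0 / (1.0 / prior_sd**2 + n_obs / obs_sd**2)
    post_mean = post_var * (prior_mu / prior_sd**2 + y.sum() / obs_sd**2)
    draws = rng.normal(post_mean, np.sqrt(post_var), size=n_post)
    # 4. rank of the true theta among the posterior draws
    ranks.append(int((draws < theta).sum()))

ranks = np.array(ranks)
# A calibrated model/inference pair yields ranks uniform on {0, ..., n_post};
# a skewed or U-shaped rank histogram signals an implementation problem.
```

In practice step 3 is a full MCMC or amortized fit rather than a closed-form posterior, which is exactly the inference-time challenge raised in the takeaways.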

Jun 19, 2025 • 3min
Live Show Announcement | Come Meet Me in London!
Join a lively discussion about uncertainty quantification in statistical models, focusing on the challenges and realities of building reliable models. Explore why overconfident models can lead to failures in production. Discover useful tools and frameworks that help tackle these issues. Experts will share insights on how we need to rethink our approach to achieve robust machine learning over the next decade. Get ready for an engaging session filled with hard questions and practical wisdom!

Jun 18, 2025 • 15min
BITESIZE | Exploring Dynamic Regression Models, with David Kohns
In this engaging discussion, David Kohns, a researcher at Aalto University specializing in probabilistic programming, shares his insights on the future of Bayesian statistics. He explores the complexities of time series modeling and the significance of setting informative priors. The conversation highlights innovative tools like normalizing flows that streamline Bayesian inference. David also delves into the intricate relationship between AI and prior elicitation, making Bayesian methods more accessible while maintaining the need for practical understanding.