
Learning Bayesian Statistics
Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date, or simply understand what Bayesian inference is?
Then this podcast is for you! You'll hear from researchers and practitioners of all fields about how they use Bayesian statistics, and how in turn YOU can apply these methods in your modeling workflow.
When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible.
So I created "Learning Bayesian Statistics", where you'll get to hear how Bayesian statistics are used to detect dark matter in outer space, forecast elections or understand how diseases spread and can ultimately be stopped.
But this show is not only about successes -- it's also about failures, because that's how we learn best. So you'll often hear the guests talking about what *didn't* work in their projects, why, and how they overcame these challenges. Because, in the end, we're all lifelong learners!
My name is Alex Andorra by the way, and I live in Estonia. By day, I'm a data scientist and modeler at the https://www.pymc-labs.io/ (PyMC Labs) consultancy. By night, I don't (yet) fight crime, but I'm an open-source enthusiast and core contributor to the Python packages https://docs.pymc.io/ (PyMC) and https://arviz-devs.github.io/arviz/ (ArviZ). I also love https://www.pollsposition.com/ (election forecasting) and, most importantly, Nutella. But I don't like talking about it – I prefer eating it.
So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you -- just subscribe! You can also support the show and https://www.patreon.com/learnbayesstats (unlock exclusive Bayesian swag on Patreon)!
Latest episodes

Jul 4, 2025 • 21min
BITESIZE | Understanding Simulation-Based Calibration, with Teemu Säilynoja
Teemu Säilynoja, an expert in simulation-based calibration and probabilistic programming, shares insights into the vital role of simulation-based calibration (SBC) in model validation. He discusses the challenges of developing SBC methods, focusing on the importance of prior and posterior analyses. The conversation dives into practical applications using tools like Stan and PyMC, and the significance of smart initialization in MCMC fitting. Teemu's expertise shines as he highlights strategies, including the Pathfinder approach, for navigating complex Bayesian models.

Jun 25, 2025 • 1h 12min
#135 Bayesian Calibration and Model Checking, with Teemu Säilynoja
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Teemu focuses on calibration assessments and predictive checking in Bayesian workflows.
Simulation-based calibration (SBC) checks the model implementation.
SBC involves drawing realizations from the prior and generating prior predictive data (a minimal code sketch follows at the end of these show notes).
Visual predictive checking is crucial for assessing model predictions.
Prior predictive checks should be done before looking at the data.
Posterior SBC focuses on the area of parameter space most relevant to the data.
Challenges in SBC include inference time.
Visualizations complement numerical metrics in Bayesian modeling.
Amortized Bayesian inference benefits from SBC for quick posterior checks.
The calibration of Bayesian models is more intuitive than that of Frequentist models.
Choosing the right visualization depends on data characteristics.
Using multiple visualization methods can reveal different insights.
Visualizations should be viewed as models of the data.
Goodness-of-fit tests can enhance visualization accuracy.
Uncertainty visualization is crucial but often overlooked.

Chapters:
09:53 Understanding Simulation-Based Calibration (SBC)
15:03 Practical Applications of SBC in Bayesian Modeling
22:19 Challenges in Developing Posterior SBC
29:41 The Role of SBC in Amortized Bayesian Inference
33:47 The Importance of Visual Predictive Checking
36:50 Predictive Checking and Model Fitting
38:08 The Importance of Visual Checks
40:54 Choosing Visualization Types
49:06 Visualizations as Models
55:02 Uncertainty Visualization in Bayesian Modeling
01:00:05 Future Trends in Probabilistic Modeling

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.

Links from the show:
Teemu's website: https://teemusailynoja.github.io/
Teemu on LinkedIn: https://www.linkedin.com/in/teemu-sailynoja/
Teemu on GitHub: https://github.com/TeemuSailynoja
Bayesian Workflow group: https://users.aalto.fi/~ave/group.html
LBS #107 Amortized Bayesian Inference with Deep Neural Networks, with Marvin Schmitt: https://learnbayesstats.com/episode/107-amortized-bayesian-inference-deep-neural-networks-marvin-schmitt
LBS #73 A Guide to Plotting Inferences & Uncertainties of Bayesian Models, with Jessica Hullman: https://learnbayesstats.com/episode/73-guide-plotting-inferences-uncertainties-bayesian-models-jessica-hullman
LBS #66 Uncertainty Visualization & Usable Stats, with Matthew Kay: https://learnbayesstats.com/episode/66-uncertainty-visualization-usable-stats-matthew-kay
LBS #35 The Past, Present & Future of BRMS, with Paul Bürkner: https://learnbayesstats.com/episode/35-past-present-future-brms-paul-burkner
LBS #29 Model Assessment, Non-Parametric Models, And Much More, with Aki Vehtari: https://learnbayesstats.com/episode/model-assessment-non-parametric-models-aki-vehtari
Posterior SBC – Simulation-Based Calibration Checking Conditional on Data: https://arxiv.org/abs/2502.03279
Recommendations for visual predictive checks in Bayesian workflow: https://teemusailynoja.github.io/visual-predictive-checks/
Simuk, SBC for PyMC: https://simuk.readthedocs.io/en/latest/
SBC, tools for model validation in R: https://hyunjimoon.github.io/SBC/index.html
New ArviZ, prior and posterior predictive checks: https://arviz-devs.github.io/EABM/Chapters/Prior_posterior_predictive_checks.html
Bayesplot, plotting for Bayesian models in R: https://mc-stan.org/bayesplot/

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
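
For readers who want to see the idea in code, here is a minimal, hand-rolled sketch of the SBC loop described in the takeaways, using a toy normal-mean model in PyMC. Everything in it (the model, the sizes, the variable names) is an illustrative assumption rather than Teemu's code; a real workflow would lean on dedicated tooling such as the Simuk or SBC packages linked above.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(2025)
n_sims, n_obs, n_post = 20, 30, 200   # kept deliberately small so the sketch runs quickly
ranks = []

for i in range(n_sims):
    mu_true = rng.normal(0.0, 1.0)                 # 1. draw a "true" parameter from the prior
    y_sim = rng.normal(mu_true, 1.0, size=n_obs)   # 2. generate prior predictive data from it
    with pm.Model():                               # 3. refit the same model to the simulated data
        mu = pm.Normal("mu", 0.0, 1.0)
        pm.Normal("y", mu=mu, sigma=1.0, observed=y_sim)
        idata = pm.sample(draws=n_post, tune=500, chains=2,
                          progressbar=False, random_seed=i)
    post = idata.posterior["mu"].values.flatten()[:n_post]
    ranks.append(int((post < mu_true).sum()))      # 4. rank of the truth among posterior draws

# If model and sampler are correctly calibrated, the ranks should look
# uniform on [0, n_post]; a full SBC run would use many more simulations
# and thin the posterior draws before ranking.
print(np.histogram(ranks, bins=5, range=(0, n_post))[0])
```

The same loop is what posterior SBC modifies: instead of drawing the "true" parameters from the prior, it conditions on the observed data so the check concentrates on the region of parameter space that actually matters for your analysis.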

Jun 19, 2025 • 3min
Live Show Announcement | Come Meet Me in London!
Join a lively discussion about uncertainty quantification in statistical models, focusing on the challenges and realities of building reliable models. Explore why overconfident models can lead to failures in production. Discover useful tools and frameworks that help tackle these issues. Experts will share insights on how we need to rethink our approach to achieve robust machine learning over the next decade. Get ready for an engaging session filled with hard questions and practical wisdom!

Jun 18, 2025 • 15min
BITESIZE | Exploring Dynamic Regression Models, with David Kohns
In this engaging discussion, David Kohns, a researcher at Aalto University specializing in probabilistic programming, shares his insights on the future of Bayesian statistics. He explores the complexities of time series modeling and the significance of setting informative priors. The conversation highlights innovative tools like normalizing flows that streamline Bayesian inference. David also delves into the intricate relationship between AI and prior elicitation, making Bayesian methods more accessible while maintaining the need for practical understanding.

Jun 10, 2025 • 1h 41min
#134 Bayesian Econometrics, State Space Models & Dynamic Regression, with David Kohns
David Kohns, a postdoctoral researcher at Aalto University, enriches the discussion with insights on Bayesian econometrics. He dives into the significance of setting appropriate priors to mitigate overfitting and enhance model performance. Dynamic regression is explored, emphasizing how it captures evolving relationships over time. State-space models are explained as a structured approach in time series analysis, which aids in forecasting and understanding economic dynamics. Kohns also discusses AI's role in prior elicitation, bringing innovative perspectives to statistical modeling.
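
As a rough illustration of what "dynamic regression" means in practice, here is a short PyMC sketch in which the regression coefficient follows a Gaussian random walk, so the effect of x on y is allowed to drift over time. The model, data, and priors are assumptions made for this example, not David's code.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
T = 100
x = rng.normal(size=T)
beta_true = np.cumsum(rng.normal(0.0, 0.1, size=T))   # slowly drifting "true" effect
y = beta_true * x + rng.normal(0.0, 0.5, size=T)

with pm.Model() as dynamic_reg:
    # How fast the coefficient may drift: a tight prior here acts as
    # regularisation against overfitting short-term wiggles in the data.
    sigma_beta = pm.HalfNormal("sigma_beta", 0.2)
    beta = pm.GaussianRandomWalk(
        "beta", sigma=sigma_beta, init_dist=pm.Normal.dist(0.0, 1.0), shape=T
    )
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y", mu=beta * x, sigma=sigma, observed=y)
    idata = pm.sample(target_accept=0.9)
```

The random walk on the coefficient is the simplest state-space building block; richer state-space formulations add trend, seasonal, or measurement components on top of the same idea.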

Jun 4, 2025 • 17min
BITESIZE | Why Your Models Might Be Wrong & How to Fix it, with Sean Pinkney & Adrian Seyboldt
This discussion features Sean Pinkney, an expert in statistical modeling, alongside Adrian Seyboldt. They explore the concept of Zero-Sum Normal in hierarchical models and its implications. The duo dives into the challenges of incorporating new data, distinguishing between population and sample effects, and offers insights into enhancing model accuracy. They also suggest potential automated tools for improved predictions based on population parameters, tackling common statistical modeling challenges along the way.

May 28, 2025 • 1h 12min
#133 Making Models More Efficient & Flexible, with Sean Pinkney & Adrian Seyboldt
Sean Pinkney, a managing director at Omnicom Media Group and Stan contributor, teams up with Adrian Seyboldt, creator of the nutpie sampler, to delve into innovative statistical modeling. They discuss enhancing hierarchical models with zero-sum constraints and the vital differences between population and sample means. Insights on Cholesky parameterization and improved sampling techniques are also explored. Their collaboration emphasizes how sharing knowledge fosters research advancements, making complex statistical problems more approachable and efficient.
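
To make the zero-sum idea concrete, here is a minimal hierarchical-model sketch in PyMC (names, sizes, and priors are illustrative assumptions, not Sean and Adrian's code): the group offsets are drawn from a ZeroSumNormal, so they cannot trade off against the global intercept, and the intercept keeps its population-mean interpretation.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_groups, n_per_group = 6, 25
group_idx = np.repeat(np.arange(n_groups), n_per_group)
true_offsets = rng.normal(0.0, 0.5, size=n_groups)
y = rng.normal(1.0 + true_offsets[group_idx], 1.0)

with pm.Model() as hierarchical:
    intercept = pm.Normal("intercept", 0.0, 2.0)       # population-level mean
    sigma_group = pm.HalfNormal("sigma_group", 1.0)    # spread of the group effects
    # Offsets are constrained to sum to zero across groups, removing the
    # additive non-identifiability between the offsets and the intercept.
    group_offset = pm.ZeroSumNormal("group_offset", shape=n_groups)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y",
              mu=intercept + sigma_group * group_offset[group_idx],
              sigma=sigma, observed=y)
    idata = pm.sample()
```

This is one simple reading of the zero-sum constraint discussed in the episode, not the only way to parameterize such models; whether you want sample-mean or population-mean semantics for the intercept is exactly the distinction the guests dig into.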

May 21, 2025 • 22min
BITESIZE | How AI is Redefining Human Interactions, with Tom Griffiths
In this discussion, Professor Tom Griffiths from Princeton University, an expert in psychology and computer science, shares insights on the interplay between human and artificial intelligence. He highlights key differences in learning processes, emphasizing that AI should enhance human capabilities rather than merely mimic them. Tom addresses how AI can help overcome human biases, improve decision-making, and align better with human cognition. The conversation underscores the need for AI models that reflect human understanding to make more effective systems.

May 13, 2025 • 1h 30min
#132 Bayesian Cognition and the Future of Human-AI Interaction, with Tom Griffiths
In this discussion, Tom Griffiths, a Henry Luce professor at Princeton, bridges psychology and computer science. He reveals how Bayesian statistics can enhance our understanding of human cognition and learning. The conversation touches on the importance of individual responses over averages, and how generative AI mirrors human cognitive processes. Griffiths explains the fundamental differences between human and machine intelligence, emphasizing the potential for AI to improve human decision-making while navigating challenges in language learning and alignment.

May 7, 2025 • 14min
BITESIZE | Hacking Bayesian Models for Better Performance, with Luke Bornn
Luke Bornn, a sports analytics expert specializing in generative models, dives into the fascinating world of Bayesian modeling. He discusses how to effectively integrate spatial and temporal data to predict outcomes in sports. The conversation touches on the challenges of creating interpretable priors and optimizing model performance. Luke also shares innovative methods for improving Bayesian models while navigating complexities in computation and posterior sampling. Tune in for insights that blend statistical prowess with sports strategy!