

Learning Bayesian Statistics
Alexandre Andorra
Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date, or simply understand what Bayesian inference is? Then this podcast is for you! You'll hear from researchers and practitioners of all fields about how they use Bayesian statistics, and how in turn YOU can apply these methods in your modeling workflow. When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible. So I created "Learning Bayesian Statistics", where you'll get to hear how Bayesian statistics are used to detect dark matter in outer space, forecast elections, or understand how diseases spread and can ultimately be stopped. But this show is not only about successes -- it's also about failures, because that's how we learn best. So you'll often hear the guests talking about what *didn't* work in their projects, why, and how they overcame these challenges. Because, in the end, we're all lifelong learners! My name is Alex Andorra, by the way. By day, I'm a senior data scientist. By night, I don't (yet) fight crime, but I'm an open-source enthusiast and core contributor to the Python packages PyMC and ArviZ. I also love Nutella, but I don't like talking about it -- I prefer eating it. So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you -- just subscribe! You can also support the show and unlock exclusive Bayesian swag on Patreon!
Episodes
Mentioned books

Jul 30, 2025 • 25min
BITESIZE | Practical Applications of Causal AI with LLMs, with Robert Ness
Robert Ness, a Microsoft expert in causal assumptions, shares insights on the intersection of causal inference and deep learning. He emphasizes the importance of understanding causal concepts in statistical modeling. The conversation dives into the evolution of probabilistic machine learning and the impact of inductive biases on AI models. Notably, Ness elaborates on how large language models can formalize causal relationships, translating natural language into structured frameworks, making causal analysis more accessible and practical.

Jul 23, 2025 • 1h 38min
#137 Causal AI & Generative Models, with Robert Ness
Robert Ness, a research scientist at Microsoft and faculty at Northeastern University, dives deep into Causal AI. He discusses the critical role of causal assumptions in statistical modeling and how they enhance decision-making processes. The integration of deep learning with causal models is explored, revealing new frontiers in AI. Furthermore, Ness emphasizes the necessity of statistical rigor when evaluating large language models and highlights practical applications and future directions for causal generative modeling in various fields.

Jul 16, 2025 • 18min
BITESIZE | How to Make Your Models Faster, with Haavard Rue & Janet van Niekerk
Janet van Niekerk, a Bayesian statistician with a PhD focusing on Bayesian inference, joins Haavard Rue to unveil the game-changing Integrated Nested Laplace Approximations (INLA) method. They discuss how INLA vastly improves model speed and scalability for large datasets compared to traditional MCMC techniques. The duo dives into the intricacies of latent Gaussian models, their practical applications in fields like global health, and the rapid development of the R-INLA package that enhances Bayesian analysis efficiency. Tune in for insights that could transform your statistical modeling!

Jul 9, 2025 • 1h 18min
#136 Bayesian Inference at Scale: Unveiling INLA, with Haavard Rue & Janet van Niekerk
Haavard Rue, a professor and the mastermind behind Integrated Nested Laplace Approximations (INLA), joins Janet van Niekerk, a research scientist specializing in its application to medical statistics. They dive into the advantages of INLA over traditional MCMC methods, highlighting its efficiency with large datasets. The conversation touches on computational challenges, the significance of carefully chosen priors, and the potential of integrating GPUs for future advancements. They also share insights on using INLA for complex models, particularly in healthcare and spatial analysis.

Jul 4, 2025 • 21min
BITESIZE | Understanding Simulation-Based Calibration, with Teemu Säilynoja
Teemu Säilynoja, an expert in simulation-based calibration and probabilistic programming, shares insights into the vital role of simulation-based calibration (SBC) in model validation. He discusses the challenges of developing SBC methods, focusing on the importance of prior and posterior analyses. The conversation dives into practical applications using tools like Stan and PyMC, and the significance of smart initialization in MCMC fitting. Teemu's expertise shines as he highlights strategies, including the Pathfinder approach, for navigating complex Bayesian models.

Jun 25, 2025 • 1h 12min
#135 Bayesian Calibration and Model Checking, with Teemu Säilynoja
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
- Teemu focuses on calibration assessments and predictive checking in Bayesian workflows.
- Simulation-based calibration (SBC) checks model implementation.
- SBC involves drawing realizations from the prior and generating prior predictive data.
- Visual predictive checking is crucial for assessing model predictions.
- Prior predictive checks should be done before looking at the data.
- Posterior SBC focuses on the area of parameter space most relevant to the data.
- Challenges in SBC include inference time.
- Visualizations complement numerical metrics in Bayesian modeling.
- Amortized Bayesian inference benefits from SBC for quick posterior checks.
- The calibration of Bayesian models is more intuitive than that of Frequentist models.
- Choosing the right visualization depends on data characteristics.
- Using multiple visualization methods can reveal different insights.
- Visualizations should be viewed as models of the data.
- Goodness-of-fit tests can enhance visualization accuracy.
- Uncertainty visualization is crucial but often overlooked.

Chapters:
09:53 Understanding Simulation-Based Calibration (SBC)
15:03 Practical Applications of SBC in Bayesian Modeling
22:19 Challenges in Developing Posterior SBC
29:41 The Role of SBC in Amortized Bayesian Inference
33:47 The Importance of Visual Predictive Checking
36:50 Predictive Checking and Model Fitting
38:08 The Importance of Visual Checks
40:54 Choosing Visualization Types
49:06 Visualizations as Models
55:02 Uncertainty Visualization in Bayesian Modeling
01:00:05 Future Trends in Probabilistic Modeling

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand...
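The SBC loop described in the takeaways (draw a parameter from the prior, generate prior predictive data, fit, then rank the true value among posterior draws) can be sketched in a few lines. This is a minimal illustration, not the workflow from the episode: it uses a conjugate normal-normal model so the posterior is analytic, standing in for the MCMC fit you would run in Stan or PyMC. If model and inference are correct, the ranks are uniform.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, n_obs, n_post = 1000, 20, 100

ranks = []
for _ in range(n_sims):
    mu0 = rng.normal(0.0, 1.0)            # 1. draw a "true" parameter from the prior
    y = rng.normal(mu0, 1.0, size=n_obs)  # 2. generate prior predictive data
    # 3. "fit": conjugate posterior N(n*ybar/(n+1), 1/(n+1)) with unit prior/noise sd
    post_mean = n_obs * y.mean() / (n_obs + 1)
    post_sd = np.sqrt(1.0 / (n_obs + 1))
    post = rng.normal(post_mean, post_sd, size=n_post)
    ranks.append(int((post < mu0).sum()))  # 4. rank of the true value among draws

ranks = np.asarray(ranks)
# Calibration check: ranks should be uniform on {0, ..., n_post};
# a histogram of `ranks` (e.g. via ArviZ/matplotlib) makes miscalibration visible.
```

A skewed or U-shaped rank histogram signals a bug in the model implementation or a biased sampler, which is exactly what SBC is designed to catch before you trust any real-data fit.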

Jun 19, 2025 • 3min
Live Show Announcement | Come Meet Me in London!
Join a lively discussion about uncertainty quantification in statistical models, focusing on the challenges and realities of building reliable models. Explore why overconfident models can lead to failures in production. Discover useful tools and frameworks that help tackle these issues. Experts will share insights on how we need to rethink our approach to achieve robust machine learning over the next decade. Get ready for an engaging session filled with hard questions and practical wisdom!

Jun 18, 2025 • 15min
BITESIZE | Exploring Dynamic Regression Models, with David Kohns
In this engaging discussion, David Kohns, a researcher at Aalto University specializing in probabilistic programming, shares his insights on the future of Bayesian statistics. He explores the complexities of time series modeling and the significance of setting informative priors. The conversation highlights innovative tools like normalizing flows that streamline Bayesian inference. David also delves into the intricate relationship between AI and prior elicitation, making Bayesian methods more accessible while maintaining the need for practical understanding.

Jun 10, 2025 • 1h 41min
#134 Bayesian Econometrics, State Space Models & Dynamic Regression, with David Kohns
David Kohns, a postdoctoral researcher at Aalto University, enriches the discussion with insights on Bayesian econometrics. He dives into the significance of setting appropriate priors to mitigate overfitting and enhance model performance. Dynamic regression is explored, emphasizing how it captures evolving relationships over time. State-space models are explained as a structured approach in time series analysis, which aids in forecasting and understanding economic dynamics. Kohns also discusses AI's role in prior elicitation, bringing innovative perspectives to statistical modeling.

Jun 4, 2025 • 17min
BITESIZE | Why Your Models Might Be Wrong & How to Fix it, with Sean Pinkney & Adrian Seyboldt
This discussion features Sean Pinkney, an expert in statistical modeling, alongside Adrian Seyboldt. They explore the Zero-Sum Normal distribution in hierarchical models and its implications. The duo dives into the challenges of incorporating new data, distinguishing between population and sample effects, and offers insights into enhancing model accuracy. They also suggest potential automated tools for improved predictions based on population parameters, tackling common statistical modeling challenges along the way.
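The zero-sum idea discussed here can be illustrated in a couple of lines: group effects are constrained to sum to zero, so the global intercept (not the group offsets) carries the overall level, which is what separates population from sample effects. Below is a simplified numpy sketch of the constraint only; in practice PyMC's `ZeroSumNormal` distribution implements this as a proper reparameterization with the implied variance correction, which plain mean-subtraction does not capture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups = 5

# Unconstrained group effects
raw = rng.normal(0.0, 1.0, size=n_groups)

# Zero-sum constraint: subtracting the mean forces the effects to sum to zero,
# so any overall shift must be absorbed by a separate global intercept
effects = raw - raw.mean()
```

With this constraint, the group effects are identified relative to each other, which is why the episode distinguishes reasoning about a new, unseen group (a population-level question) from reasoning about the observed groups (a sample-level question).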


