Learning Bayesian Statistics

Alexandre Andorra
Oct 2, 2025 • 1h 10min

#142 Bayesian Trees & Deep Learning for Optimization & Big Data, with Gabriel Stechschulte

Gabriel Stechschulte is a software engineer specializing in Bayesian methods and optimization. He discusses the power of Bayesian Additive Regression Trees (BART) for uncertainty quantification and its re-implementation in Rust, enhancing performance for big data. Gabriel explores how BART contrasts with other models, its strengths in avoiding overfitting, and its integration into optimization frameworks for decision-making. He also emphasizes the importance of open-source communities, encouraging newcomers to contribute actively.
Sep 24, 2025 • 22min

BITESIZE | How Probability Becomes Causality?

In this engaging discussion, Sam Witty, a researcher on the ChiRho project, dives into the fascinating world of causal inference. He explains the differences between do-calculus and ChiRho's parametric Bayesian methods, and how regression discontinuity designs enable causal estimation. Sam also explores how ChiRho automates the construction of interventions, giving users easy access to complex statistical tools. The talk highlights the significance of efficient estimators, which make causal queries accessible without extensive expertise.
Sep 18, 2025 • 1h 38min

#141 AI Assisted Causal Inference, with Sam Witty

In this engaging discussion, Sam Witty, the founder of Sorbus AI and a pioneer in causal probabilistic programming, dives into the intricacies of causal inference. He explores his journey from engineering to developing ChiRho, a language that merges mechanistic and data-driven models. Listeners will learn about counterfactual reasoning, the significance of modular design, and practical applications in science and engineering. Sam emphasizes the need for collaboration in transforming causal questions into actionable insights, while also looking ahead at the future of causal AI.
Sep 10, 2025 • 24min

BITESIZE | How to Think Causally About Your Models?

In this discussion, Ron Yurko, an expert in sports analytics, shares insights on the complexities of modeling player contributions in soccer and football. He highlights the significance of understanding replacement levels and introduces the Going Deep framework for analyzing player performance. They touch on the challenges of teaching Bayesian modeling, particularly how students struggle with writing models. The conversation underscores the importance of using advanced tracking data for better predictions and the necessity of examining entire distributions when modeling utility functions.
Sep 3, 2025 • 1h 33min

#140 NFL Analytics & Teaching Bayesian Stats, with Ron Yurko

Ron Yurko, an Assistant Teaching Professor and Director of Sports Analytics at Carnegie Mellon University, shares his expertise in Bayesian statistics applied to NFL analytics. He emphasizes the significance of teaching students model-building skills and engaging them in practical projects. The discussion highlights challenges in player performance modeling, the impact of tracking data, and the evolving curriculum in sports analytics education. Ron also advocates for developing a robust sports analytics portfolio to help aspiring analysts thrive in the industry.
Aug 27, 2025 • 25min

BITESIZE | Is Bayesian Optimization the Answer?

In this discussion, Max Balandat, a key figure in Bayesian optimization and an advocate for open-source culture at Meta, shares insights on the integration of BoTorch with PyTorch. He highlights the flexibility and user-friendly nature of GPyTorch for handling optimization challenges with large datasets. Max explores the advantages of using neural networks as feature extractors in high-dimensional Bayesian optimization and emphasizes the importance of open-source collaboration in advancing research in this dynamic field.
Aug 20, 2025 • 1h 25min

#139 Efficient Bayesian Optimization in PyTorch, with Max Balandat

Max Balandat, who leads the modeling and optimization team at Meta, discusses the fascinating world of Bayesian optimization and the BoTorch library. He shares insights on the seamless integration of BoTorch with PyTorch, enhancing flexibility for researchers. The conversation delves into the significance of adaptive experimentation and the impact of LLMs on optimization. Max emphasizes the importance of effectively communicating uncertainty to stakeholders and reflects on the transition from academia to industry, highlighting collaboration in research.
Aug 13, 2025 • 21min

BITESIZE | What's Missing in Bayesian Deep Learning?

Yingzhen Li, a researcher specializing in Bayesian communication and uncertainty in neural networks, teams up with François-Xavier Briol, who focuses on machine learning tools for Bayesian statistics. They dive into the complexities of Bayesian deep learning, emphasizing uncertainty quantification and its role in effective modeling. The discussion covers the evolution of Bayesian models, simulation-based inference methods, and the urgent need for better computational tools to tackle high-dimensional challenges. Their insights on integrating machine learning with Bayesian approaches spark exciting possibilities in the field.
Aug 6, 2025 • 1h 23min

#138 Quantifying Uncertainty in Bayesian Deep Learning, Live from Imperial College London

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
- Bayesian deep learning is a growing field with many challenges.
- Current research focuses on applying Bayesian methods to neural networks.
- Diffusion methods are emerging as a new approach for uncertainty quantification.
- The integration of machine learning tools into Bayesian models is a key area of research.
- The complexity of Bayesian neural networks poses significant computational challenges.
- Future research will focus on improving methods for uncertainty quantification.
- Generalized Bayesian inference offers a more robust approach to uncertainty.
- Uncertainty quantification is crucial in fields like medicine and epidemiology.
- Detecting out-of-distribution examples is essential for model reliability.
- The exploration-exploitation trade-off is vital in reinforcement learning.
- Marginal likelihood can be misleading for model selection.
- The integration of Bayesian methods in LLMs presents unique challenges.

Chapters:
00:00 Introduction to Bayesian Deep Learning
03:12 Panelist Introductions and Backgrounds
10:37 Current Research and Challenges in Bayesian Deep Learning
18:04 Contrasting Approaches: Bayesian vs. Machine Learning
26:09 Tools and Techniques for Bayesian Deep Learning
31:18 Innovative Methods in Uncertainty Quantification
36:23 Generalized Bayesian Inference and Its Implications
41:38 Robust Bayesian Inference and Gaussian Processes
44:24 Software Development in Bayesian Statistics
46:51 Understanding Uncertainty in Language Models
50:03 Hallucinations in Language Models
53:48 Bayesian Neural Networks vs. Traditional Neural Networks
58:00 Challenges with Likelihood Assumptions
01:01:22 Practical Applications of Uncertainty Quantification
01:04:33 Meta Decision-Making with Uncertainty
01:06:50 Exploring Bayesian Priors in Neural Networks
01:09:17 Model Complexity and Data Signal
01:12:10 Marginal Likelihood and Model Selection
01:15:03 Implementing Bayesian Methods in LLMs
01:19:21 Out-of-Distribution Detection in LLMs

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Adam Tilmar Jakobsen.

Dr. Mélodie Monod (Imperial College London, School of Public Health)
Mélodie completed her PhD as part of the EPSRC Modern Statistics and Statistical Machine Learning program at Imperial College London, transitioned to Novartis as Principal Biostatistician, and is currently a Postdoctoral Researcher in Machine Learning at Imperial. Her research includes diffusion models, Bayesian deep learning, non-parametric Bayesian statistics and pandemic modelling. For more details, see her Google Scholar Publications page.

Dr. François-Xavier Briol (University College London, Department of Statistical Science)
F-X is Associate Professor in the Department of Statistical Science at University College London, where he leads the Fundamentals of Statistical Machine Learning research group and is co-director of the UCL ELLIS unit. His research focuses on developing statistical and machine learning methods for the sciences and engineering, with his recent work focusing on Bayesian computation and robustness to model misspecification. For more details, see his Google Scholar page.

Dr. Yingzhen Li (Imperial College London, Department of Computing)
Yingzhen is Associate Professor in Machine Learning at the Department of Computing at Imperial College London, following several years at Microsoft Research Cambridge as a senior researcher. Her research focuses on building reliable machine learning systems which can generalise to unseen environments, including topics such as (deep) probabilistic graphical model design, fast and accurate (Bayesian) inference/computation techniques, uncertainty quantification for computation and downstream tasks, and robust and adaptive machine learning systems. For more details, see her Google Scholar Publications page.

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Jul 30, 2025 • 25min

BITESIZE | Practical Applications of Causal AI with LLMs, with Robert Ness

Robert Ness, a Microsoft expert in causal assumptions, shares insights on the intersection of causal inference and deep learning. He emphasizes the importance of understanding causal concepts in statistical modeling. The conversation dives into the evolution of probabilistic machine learning and the impact of inductive biases on AI models. Notably, Ness elaborates on how large language models can formalize causal relationships, translating natural language into structured frameworks, making causal analysis more accessible and practical.
