Learning Bayesian Statistics

Alexandre Andorra
Sep 3, 2025 • 1h 33min

#140 NFL Analytics & Teaching Bayesian Stats, with Ron Yurko

Ron Yurko, an Assistant Teaching Professor and Director of Sports Analytics at Carnegie Mellon University, shares his expertise in Bayesian statistics applied to NFL analytics. He emphasizes the significance of teaching students model-building skills and engaging them in practical projects. The discussion highlights challenges in player performance modeling, the impact of tracking data, and the evolving curriculum in sports analytics education. Ron also advocates for developing a robust sports analytics portfolio to help aspiring analysts thrive in the industry.
Aug 27, 2025 • 25min

BITESIZE | Is Bayesian Optimization the Answer?

In this discussion, Max Balandat, a key figure in Bayesian optimization and an advocate for open-source culture at Meta, shares insights on the integration of BoTorch with PyTorch. He highlights the flexibility and user-friendly nature of GPyTorch for handling optimization challenges with large datasets. Max explores the advantages of using neural networks as feature extractors in high-dimensional Bayesian optimization and emphasizes the importance of open-source collaboration in advancing research in this dynamic field.
Aug 20, 2025 • 1h 25min

#139 Efficient Bayesian Optimization in PyTorch, with Max Balandat

Max Balandat, who leads the modeling and optimization team at Meta, discusses the fascinating world of Bayesian optimization and the BoTorch library. He shares insights on the seamless integration of BoTorch with PyTorch, enhancing flexibility for researchers. The conversation delves into the significance of adaptive experimentation and the impact of LLMs on optimization. Max emphasizes the importance of effectively communicating uncertainty to stakeholders and reflects on the transition from academia to industry, highlighting collaboration in research.
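To make the propose-evaluate loop at the heart of this episode concrete, here is a minimal Bayesian-optimization sketch in plain NumPy. It is not BoTorch itself (which builds on GPyTorch and gradient-based acquisition optimization); the toy objective, kernel lengthscale, and grid search over the acquisition function are all illustrative assumptions.

```python
import numpy as np
from math import erf

def rbf(a, b, ls=0.2):
    # squared-exponential kernel between 1-D point sets
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=1e-4):
    # GP posterior mean and std at test points (unit prior variance)
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_te)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y_tr
    var = np.maximum(1.0 - np.einsum("ij,ik,kj->j", Ks, Kinv, Ks), 1e-12)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI for maximization: E[max(f - best, 0)] under the GP posterior
    z = (mu - best) / sigma
    cdf = np.array([0.5 * (1 + erf(v / np.sqrt(2))) for v in z])
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return (mu - best) * cdf + sigma * pdf

f = lambda x: -(x - 0.6) ** 2            # toy objective, maximum at x = 0.6
grid = np.linspace(0.0, 1.0, 101)
x_tr = np.array([0.0, 0.5, 1.0])         # initial design
y_tr = f(x_tr)
for _ in range(10):                      # propose, evaluate, update
    mu, sd = gp_posterior(x_tr, y_tr, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y_tr.max()))]
    x_tr = np.append(x_tr, x_next)
    y_tr = np.append(y_tr, f(x_next))
x_best = x_tr[np.argmax(y_tr)]           # best observed point, near 0.6
```

In BoTorch the same loop would use a `SingleTaskGP` surrogate and optimize the acquisition function with gradients rather than a grid, which is what makes it scale beyond one dimension.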
Aug 13, 2025 • 21min

BITESIZE | What's Missing in Bayesian Deep Learning?

Yingzhen Li, a researcher specializing in Bayesian computation and uncertainty in neural networks, teams up with François-Xavier Briol, who focuses on machine learning tools for Bayesian statistics. They dive into the complexities of Bayesian deep learning, emphasizing uncertainty quantification and its role in effective modeling. The discussion covers the evolution of Bayesian models, simulation-based inference methods, and the urgent need for better computational tools to tackle high-dimensional challenges. Their insights on integrating machine learning with Bayesian approaches spark exciting possibilities in the field.
Aug 6, 2025 • 1h 23min

#138 Quantifying Uncertainty in Bayesian Deep Learning, Live from Imperial College London

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
- Bayesian deep learning is a growing field with many challenges.
- Current research focuses on applying Bayesian methods to neural networks.
- Diffusion methods are emerging as a new approach for uncertainty quantification.
- The integration of machine learning tools into Bayesian models is a key area of research.
- The complexity of Bayesian neural networks poses significant computational challenges.
- Future research will focus on improving methods for uncertainty quantification.
- Generalized Bayesian inference offers a more robust approach to uncertainty.
- Uncertainty quantification is crucial in fields like medicine and epidemiology.
- Detecting out-of-distribution examples is essential for model reliability.
- The exploration-exploitation trade-off is vital in reinforcement learning.
- Marginal likelihood can be misleading for model selection.
- The integration of Bayesian methods in LLMs presents unique challenges.

Chapters:
00:00 Introduction to Bayesian Deep Learning
03:12 Panelist Introductions and Backgrounds
10:37 Current Research and Challenges in Bayesian Deep Learning
18:04 Contrasting Approaches: Bayesian vs. Machine Learning
26:09 Tools and Techniques for Bayesian Deep Learning
31:18 Innovative Methods in Uncertainty Quantification
36:23 Generalized Bayesian Inference and Its Implications
41:38 Robust Bayesian Inference and Gaussian Processes
44:24 Software Development in Bayesian Statistics
46:51 Understanding Uncertainty in Language Models
50:03 Hallucinations in Language Models
53:48 Bayesian Neural Networks vs. Traditional Neural Networks
58:00 Challenges with Likelihood Assumptions
01:01:22 Practical Applications of Uncertainty Quantification
01:04:33 Meta Decision-Making with Uncertainty
01:06:50 Exploring Bayesian Priors in Neural Networks
01:09:17 Model Complexity and Data Signal
01:12:10 Marginal Likelihood and Model Selection
01:15:03 Implementing Bayesian Methods in LLMs
01:19:21 Out-of-Distribution Detection in LLMs

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Adam Tilmar Jakobsen.

Dr. Mélodie Monod (Imperial College London, School of Public Health)
Mélodie completed her PhD as part of the EPSRC Modern Statistics and Statistical Machine Learning program at Imperial College London, transitioned to Novartis as Principal Biostatistician, and is currently a Postdoctoral Researcher in Machine Learning at Imperial. Her research includes diffusion models, Bayesian deep learning, non-parametric Bayesian statistics and pandemic modelling. For more details, see her Google Scholar Publications page.

Dr. François-Xavier Briol (University College London, Department of Statistical Science)
F-X is Associate Professor in the Department of Statistical Science at University College London, where he leads the Fundamentals of Statistical Machine Learning research group and is co-director of the UCL ELLIS unit. His research focuses on developing statistical and machine learning methods for the sciences and engineering, with recent work on Bayesian computation and robustness to model misspecification. For more details, see his Google Scholar page.

Dr. Yingzhen Li (Imperial College London, Department of Computing)
Yingzhen is Associate Professor in Machine Learning at the Department of Computing at Imperial College London, following several years at Microsoft Research Cambridge as a senior researcher. Her research focuses on building reliable machine learning systems which can generalise to unseen environments, including topics such as (deep) probabilistic graphical model design, fast and accurate (Bayesian) inference/computation techniques, uncertainty quantification for computation and downstream tasks, and robust and adaptive machine learning systems. For more details, see her Google Scholar Publications page.

Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Jul 30, 2025 • 25min

BITESIZE | Practical Applications of Causal AI with LLMs, with Robert Ness

Robert Ness, a Microsoft research scientist specializing in causal inference, shares insights on the intersection of causal inference and deep learning. He emphasizes the importance of understanding causal concepts in statistical modeling. The conversation dives into the evolution of probabilistic machine learning and the impact of inductive biases on AI models. Notably, Ness elaborates on how large language models can formalize causal relationships, translating natural language into structured frameworks, making causal analysis more accessible and practical.
Jul 23, 2025 • 1h 38min

#137 Causal AI & Generative Models, with Robert Ness

Robert Ness, a research scientist at Microsoft and faculty at Northeastern University, dives deep into Causal AI. He discusses the critical role of causal assumptions in statistical modeling and how they enhance decision-making processes. The integration of deep learning with causal models is explored, revealing new frontiers in AI. Furthermore, Ness emphasizes the necessity of statistical rigor when evaluating large language models and highlights practical applications and future directions for causal generative modeling in various fields.
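The "critical role of causal assumptions" discussed here can be seen in a few lines of simulation. This is a generic confounding example, not a model from the episode: the data-generating process, coefficients, and sample size are all illustrative. Regressing outcome on treatment alone gives a biased effect; the causal assumption that `z` is a confounder tells us to adjust for it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                   # confounder
x = z + rng.normal(size=n)               # treatment, influenced by z
y = 2 * x + 3 * z + rng.normal(size=n)   # outcome; true causal effect of x is 2

# naive slope of y on x ignores the confounder and is biased
# (population value: cov(x, y)/var(x) = (2*2 + 3*1)/2 = 3.5)
naive = np.cov(x, y)[0, 1] / np.var(x)

# adjusting for z (back-door adjustment via multiple regression) recovers ~2
X = np.column_stack([x, z, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = beta[0]
```

The point is that no amount of data fixes the naive estimate; only the causal assumption about `z` does, which is exactly the kind of knowledge Ness argues LLMs can help elicit and formalize.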
Jul 16, 2025 • 18min

BITESIZE | How to Make Your Models Faster, with Haavard Rue & Janet van Niekerk

Janet van Niekerk, a Bayesian statistician with a PhD focusing on Bayesian inference, joins Haavard Rue to unveil the game-changing Integrated Nested Laplace Approximations (INLA) method. They discuss how INLA vastly improves model speed and scalability for large datasets compared to traditional MCMC techniques. The duo dives into the intricacies of latent Gaussian models, their practical applications in fields like global health, and the rapid development of the R-INLA package that enhances Bayesian analysis efficiency. Tune in for insights that could transform your statistical modeling!
Jul 9, 2025 • 1h 18min

#136 Bayesian Inference at Scale: Unveiling INLA, with Haavard Rue & Janet van Niekerk

Haavard Rue, a professor and the mastermind behind Integrated Nested Laplace Approximations (INLA), joins Janet van Niekerk, a research scientist specializing in its application to medical statistics. They dive into the advantages of INLA over traditional MCMC methods, highlighting its efficiency with large datasets. The conversation touches on computational challenges, the significance of carefully chosen priors, and the potential of integrating GPUs for future advancements. They also share insights on using INLA for complex models, particularly in healthcare and spatial analysis.
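The building block behind INLA's speed is the Laplace approximation: replace a posterior with a Gaussian centered at its mode, with variance set by the curvature there. Below is a deliberately tiny illustration on a Beta-Binomial posterior (the model, data, and grid-based mode search are illustrative assumptions, not the INLA algorithm, which nests such approximations over latent Gaussian fields).

```python
import numpy as np

def log_post(theta, k=7, n=10, a=2.0, b=2.0):
    # unnormalized log posterior: Binomial(k | n, theta) x Beta(a, b) prior
    return (k + a - 1) * np.log(theta) + (n - k + b - 1) * np.log(1 - theta)

# 1) locate the posterior mode (a real implementation would use Newton steps)
grid = np.linspace(1e-4, 1 - 1e-4, 100_001)
mode = grid[np.argmax(log_post(grid))]        # analytic mode is 8/12 = 0.667

# 2) curvature at the mode via a central finite difference
h = 1e-4
curv = (log_post(mode + h) - 2 * log_post(mode) + log_post(mode - h)) / h**2

# 3) Gaussian approximation: N(mode, -1/curvature)
sd = np.sqrt(-1.0 / curv)
```

This gives a full approximate posterior from one optimization and one Hessian evaluation, instead of thousands of MCMC draws, which is why INLA scales so well for latent Gaussian models.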
Jul 4, 2025 • 21min

BITESIZE | Understanding Simulation-Based Calibration, with Teemu Säilynoja

Teemu Säilynoja, an expert in simulation-based calibration and probabilistic programming, shares insights into the vital role of simulation-based calibration (SBC) in model validation. He discusses the challenges of developing SBC methods, focusing on the importance of prior and posterior analyses. The conversation dives into practical applications using tools like Stan and PyMC, and the significance of smart initialization in MCMC fitting. Teemu's expertise shines as he highlights strategies, including the Pathfinder approach, for navigating complex Bayesian models.
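The SBC idea Teemu describes can be sketched in a few lines: draw a parameter from the prior, simulate data from it, draw posterior samples, and record the rank of the true parameter among them. If the sampler is calibrated, the ranks are uniform. The conjugate normal-normal model below is an illustrative stand-in for the exact posterior (in practice the posterior draws would come from Stan or PyMC, which is the whole point of the check).

```python
import numpy as np

rng = np.random.default_rng(42)
N, L = 2000, 9                 # simulations, posterior draws per simulation
ranks = np.empty(N, dtype=int)
for i in range(N):
    theta = rng.normal(0.0, 1.0)              # 1) draw theta from the prior
    y = rng.normal(theta, 1.0)                # 2) simulate one datum given theta
    # 3) exact conjugate posterior here: N(y/2, sqrt(1/2));
    #    an MCMC sampler's draws would replace this line
    post = rng.normal(y / 2.0, np.sqrt(0.5), size=L)
    ranks[i] = int((post < theta).sum())      # 4) rank of theta among draws

# calibrated sampler => ranks uniform on {0, ..., L}
counts = np.bincount(ranks, minlength=L + 1)
```

A histogram of `counts` that is flat (up to binomial noise) passes the check; U-shapes, humps, or skew diagnose under-dispersed, over-dispersed, or biased posteriors respectively.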
