
Causal Bandits Podcast: Causal Inference & the "Bayesian-Frequentist War" | Richard Hahn S2E8 | CausalBanditsPodcast.com
Dec 27, 2025

In this enlightening discussion, Professor Richard Hahn of Arizona State University delves into the ongoing debate between Bayesians and frequentists in statistics. He shares insights on why Bayesian Additive Regression Trees (BART) are effective and how they compare to models like XGBoost. The conversation covers the significance of heterogeneous treatment effects and the challenges of generalizing RCT results. Richard emphasizes the importance of realistic simulation studies for understanding causal inference and coins the term "feature-level selection bias" along the way. A must-listen for stats enthusiasts!
AI Snips
Bayes Risk As Evaluation Criterion
- Richard Hahn frames the Bayesian vs. frequentist divide as a choice of evaluation criteria, not of methods themselves.
- Bayes risk averages performance over both problems and data sets, offering tailored performance when you have prior knowledge (formalized below).
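
A minimal formalization of this framing, in standard decision-theoretic notation (not spelled out in the episode): for a loss L, a procedure delta, and data X drawn from a model indexed by theta,

```latex
% Frequentist risk: expected loss over data sets X for a fixed problem \theta
R(\theta, \delta) = \mathbb{E}_{X \mid \theta}\left[ L(\theta, \delta(X)) \right]

% Bayes risk: the frequentist risk additionally averaged over problems \theta \sim \pi
r(\pi, \delta) = \mathbb{E}_{\theta \sim \pi}\left[ R(\theta, \delta) \right]
             = \mathbb{E}_{\theta \sim \pi}\, \mathbb{E}_{X \mid \theta}\left[ L(\theta, \delta(X)) \right]
```

A frequentist guarantee controls R(theta, delta) across all problems (e.g., minimax), while the Bayes-optimal rule minimizes the average r(pi, delta), which is exactly where prior knowledge about which problems you actually face pays off.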
Conformalize Well‑Specified Bayesian Models
- Use conformal inference when you want finite-sample frequentist coverage guarantees around predictions.
- Combine a well-specified Bayesian model with conformalization to get both prior information and frequentist coverage, as in the sketch below.
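
A minimal sketch of that combination, assuming split conformal prediction wrapped around a Bayesian point predictor; scikit-learn's BayesianRidge stands in for whatever Bayesian model you trust (the episode does not prescribe a specific library or dataset):

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)

# Toy data: linear signal plus Gaussian noise
X = rng.normal(size=(500, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=500)

# Split into a proper training set, a calibration set, and a test set
X_train, X_cal, X_test = X[:350], X[350:450], X[450:]
y_train, y_cal, y_test = y[:350], y[350:450], y[450:]

# 1) Fit the Bayesian model on the training split only
model = BayesianRidge().fit(X_train, y_train)

# 2) Conformalize: absolute residuals on the held-out calibration set
scores = np.abs(y_cal - model.predict(X_cal))

# Finite-sample-valid conformal quantile at miscoverage level alpha
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# 3) Intervals: Bayesian point prediction +/- conformal radius
pred = model.predict(X_test)
lo, hi = pred - q, pred + q
coverage = np.mean((y_test >= lo) & (y_test <= hi))
print(f"empirical coverage: {coverage:.2f} (target {1 - alpha:.2f})")
```

The division of labor: the conformal wrapper guarantees roughly 1 - alpha marginal coverage in finite samples whether or not the model is well specified, while a well-specified Bayesian model keeps the intervals short by supplying an informative point predictor.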
Why BART Often Outperforms
- BART consistently "just works" because it is a sturdy, well-calibrated ensemble of trees with a clever regularization prior.
- Its MCMC mixes well because the model is overparameterized, giving reliable conditional mean estimates (the model is written out below).
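
The model behind those claims, in its standard form from Chipman, George & McCulloch (2010), not stated in the snip itself:

```latex
% BART: the response is a sum of m small regression trees plus Gaussian noise;
% g(x; T_j, M_j) is tree j's output given its structure T_j and leaf values M_j.
y_i = \sum_{j=1}^{m} g(x_i;\, T_j, M_j) + \varepsilon_i,
\qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2)
```

The regularization prior shrinks every tree toward a weak learner so that no single tree dominates, and because many different configurations of the m trees represent essentially the same overall function, the sampler has many routes through parameter space. That redundancy is the overparameterization credited here for good mixing.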

