
#134 Bayesian Econometrics, State Space Models & Dynamic Regression, with David Kohns
Learning Bayesian Statistics
Intro
This episode features postdoctoral researcher David Kohns discussing the ARR2 prior in Bayesian econometrics, specifically within autoregressions. The conversation includes technical insights and live-coding demonstrations of ARMA and VAR models, making it a useful resource for anyone working on Bayesian statistical modeling.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
- Intro to Bayes Course (first 2 lessons free)
- Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
- Setting appropriate priors is crucial to avoid overfitting in models.
- R-squared can be used effectively in Bayesian frameworks for model evaluation.
- Dynamic regression can incorporate time-varying coefficients to capture changing relationships.
- Predictively consistent priors enhance model interpretability and performance.
- Identifiability is a challenge in time series models.
- State-space models provide more explicit structure than Gaussian processes.
- Priors influence the model's ability to explain variance.
- Starting with simple models can reveal interesting dynamics.
- Understanding the relationship between states and variance is key.
- State-space models allow for dynamic analysis of time series data.
- AI can enhance the process of prior elicitation in statistical models.
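To make the R²-based prior idea from the takeaways more concrete, here is a minimal NumPy sketch of the general recipe behind priors like ARR2: place a Beta prior on R², convert it into a total prior variance, and split that variance across the AR lag coefficients. This is an illustrative simplification, not David's actual implementation; the Beta and Dirichlet hyperparameters, the known innovation variance, and the lag order are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

p = 2            # number of AR lags (illustrative choice)
sigma2 = 1.0     # innovation variance, assumed known here for simplicity

# 1. Prior on R^2: how much variance should the AR terms explain?
r2 = rng.beta(2, 2)

# 2. Translate R^2 into a total prior variance for the linear predictor:
#    tau^2 = sigma^2 * R^2 / (1 - R^2)
tau2 = sigma2 * r2 / (1 - r2)

# 3. Split that variance budget across lags with Dirichlet weights, so the
#    coefficients jointly respect the implied R^2 rather than each getting
#    an independent, potentially overfitting-prone scale.
psi = rng.dirichlet(np.ones(p))
phi = rng.normal(0.0, np.sqrt(tau2 * psi))

# Simulate a short AR(p) series under the drawn coefficients.
T = 200
y = np.zeros(T)
for t in range(p, T):
    y[t] = phi @ y[t - p:t][::-1] + rng.normal(0.0, np.sqrt(sigma2))
```

The point of the decomposition is that the prior is stated on a quantity modelers have intuitions about (explained variance) and only then propagated down to the coefficients; see the ARR2 paper and case study linked below for the full construction.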
Chapters:
10:09 Understanding State Space Models
14:53 Predictively Consistent Priors
20:02 Dynamic Regression and AR Models
25:08 Inflation Forecasting
50:49 Understanding Time Series Data and Economic Analysis
57:04 Exploring Dynamic Regression Models
01:05:52 The Role of Priors
01:15:36 Future Trends in Probabilistic Programming
01:20:05 Innovations in Bayesian Model Selection
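The dynamic-regression chapters above discuss time-varying coefficients. As a rough sketch of the idea (my own toy simulation, not code from the episode; the random-walk scale and noise level are arbitrary), a regression coefficient can be given a Gaussian random walk, turning the regression into a simple state-space model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dynamic regression sketch: the coefficient beta_t evolves as a Gaussian
# random walk (the state equation), so the x-y relationship can drift.
T = 300
x = rng.normal(size=T)

beta = np.zeros(T)
for t in range(1, T):
    beta[t] = beta[t - 1] + rng.normal(0.0, 0.1)  # state innovation

# Observation equation: y_t depends on x_t through the current beta_t.
y = beta * x + rng.normal(0.0, 0.5, size=T)
```

In a full Bayesian treatment the innovation scale of the random walk gets its own prior, which is exactly where predictively consistent priors such as ARR2 become useful.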
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.
Links from the show:
- David's website: https://davkoh.github.io/
- David on LinkedIn: https://www.linkedin.com/in/david-kohns-03984013b/
- David on GitHub: https://github.com/davkoh
- David on Google Scholar: https://scholar.google.com/citations?user=9gKE8e4AAAAJ&hl=en
- Dynamic Regression Case Study: https://davkoh.github.io/case-studies/01_dyn_reg/dyn_reg_casestudy5.html
- ARR2 Paper: https://projecteuclid.org/journals/bayesian-analysis/advance-publication/The-ARR2-Prior--Flexible-Predictive-Prior-Definition-for-Bayesian/10.1214/25-BA1512.full
- ARR2 Paper GitHub repository: https://github.com/n-kall/arr2/tree/main
- ARR2 StanCon talk: https://www.youtube.com/watch?v=8XBe2jrOKvw&list=PLCrWEzJgSUqzNzh6mjWsWUu-lSK59VXP6&index=29
- ARR2 Prior in PyMC: https://www.austinrochford.com/posts/r2-priors-pymc.html
- LBS #124 State Space Models & Structural Time Series, with Jesse Grabowski: https://learnbayesstats.com/episode/124-state-space-models-structural-time-series-jesse-grabowski
- LBS #107 Amortized Bayesian Inference with Deep Neural Networks, with Marvin Schmitt: https://learnbayesstats.com/episode/107-amortized-bayesian-inference-deep-neural-networks-marvin-schmitt
- LBS #74 Optimizing NUTS and Developing the ZeroSumNormal Distribution, with Adrian Seyboldt: https://learnbayesstats.com/episode/74-optimizing-nuts-developing-zerosumnormal-distribution-adrian-seyboldt
- Nutpie’s Normalizing Flows adaptation: https://pymc-devs.github.io/nutpie/nf-adapt.html
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.