
Learning Bayesian Statistics

Latest episodes

May 25, 2023 • 1h 17min

#83 Multilevel Regression, Post-Stratification & Electoral Dynamics, with Tarmo Jüristo

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

My Intuitive Bayes Online Courses
1:1 Mentorship with me

One of the greatest features of this podcast, and of my work in general, is that I keep getting surprised. Along the way, I keep learning, and I meet fascinating people, like Tarmo Jüristo.

Tarmo is hard to describe. These days, he's heading an NGO called Salk, in the Baltic state of Estonia. Among other things, they study and forecast elections, which is how we met and ended up collaborating with PyMC Labs, our Bayesian consultancy.

But Tarmo is much more than that. Born in 1971 in what was still the Soviet Union, he graduated in finance from Tartu University. He worked in finance and investment banking until the 2009 crisis, when he quit and started a doctorate in… cultural studies. He then went on to write for theater and TV, and to teach literature, anthropology and philosophy. An avid world traveler, he also teaches kendo and Brazilian jiu-jitsu.

As you'll hear in the episode, after lots of adventures, he established Salk, and they just used a Bayesian hierarchical model with post-stratification to forecast the results of the 2023 Estonian parliamentary elections and target campaign efforts at specific demographics.

Oh, and last thing: Tarmo is a fan of the show — I told you he was a great guy ;)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh and Grant Pezzolesi.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Tarmo on GitHub: https://github.com/tarmojuristo
Tarmo on Linkedin: https://www.linkedin.com/in/tarmo-j%C3%BCristo-7018bb7/
Tarmo on Twitter: https://twitter.com/tarmojuristo
Salk website: https://salk.ee/
Hierarchical Bayesian Modeling of Survey Data with Post-stratification: https://www.youtube.com/watch?v=efID35XUQ3I

Abstract
by Christoph Bamberg

In episode 83 of the podcast, Tarmo Jüristo is our guest. He recently received media attention for his electoral forecasting in the Estonian election, and for the potential positive role it played in helping liberal parties gain more votes than expected. Tarmo explains how he and his NGO Salk used Bayesian models to forecast the election, and how he leveraged these models to unify the different liberal parties that participated in it — so we get a firsthand view of how to use Bayesian modelling smartly. Furthermore, we talk about when to use Bayesian models, about the difficulties of modelling survey data, and about how post-stratification can help. He also explains how, with the help of PyMC Labs, he added Gaussian processes to his models to better capture the time-series structure of their survey data. We close this episode by discussing the responsibility that comes with modelling data in politics.

Transcript
Please note that the transcript was generated automatically and may therefore contain errors. Feel free to reach out if you're willing to correct them.
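The two-step logic of the approach described above (multilevel regression, then post-stratification) can be sketched in a few lines of Python. This is a deliberately simplified caricature with invented numbers: the partial pooling here is a plain empirical-Bayes shrinkage formula, not the full Bayesian hierarchical model (with Gaussian processes) that Salk and PyMC Labs actually built.

```python
import numpy as np

# Hypothetical survey: support for a party by demographic cell
# (cells = age group x region; all numbers invented for illustration).
cells = ["young-urban", "young-rural", "old-urban", "old-rural"]
n_resp = np.array([200, 30, 150, 80])          # survey respondents per cell
n_yes = np.array([120, 12, 60, 24])            # respondents supporting the party
N_census = np.array([25e4, 15e4, 30e4, 30e4])  # population per cell (census)

# Step 1 - "multilevel regression", caricatured as empirical-Bayes partial
# pooling: small cells are shrunk toward the overall rate. (The real model
# is a full Bayesian hierarchical regression, e.g. in PyMC.)
overall = n_yes.sum() / n_resp.sum()
tau = 50.0  # pseudo-count controlling shrinkage strength (assumed)
theta = (n_yes + tau * overall) / (n_resp + tau)

# Step 2 - post-stratification: reweight the cell estimates by *population*
# shares, not by who happened to answer the survey.
mrp_estimate = (N_census * theta).sum() / N_census.sum()

raw_estimate = n_yes.sum() / n_resp.sum()  # biased toward over-sampled cells
print(round(raw_estimate, 3), round(mrp_estimate, 3))  # 0.47 0.445
```

The raw survey average over-weights the heavily sampled urban cells; post-stratification corrects the estimate back toward the true population composition.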
May 5, 2023 • 1h 7min

#82 Sequential Monte Carlo & Bayesian Computation Algorithms, with Nicolas Chopin

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

My Intuitive Bayes Online Courses
1:1 Mentorship with me

------------------------------------------------------------------------------
Max Kochurov's State of Bayes Lecture Series: https://www.youtube.com/playlist?list=PL1iMFW7frOOsh5KOcfvKWM12bjh8zs9BQ
Sign up here for upcoming lessons: https://www.meetup.com/pymc-labs-online-meetup/events/293101751/
------------------------------------------------------------------------------

We talk a lot about different MCMC methods on this podcast, because they are the workhorses of Bayesian models. But other methods exist to infer the posterior distributions of your models — like Sequential Monte Carlo (SMC), for instance. You've never heard of SMC? Well, perfect, because Nicolas Chopin is gonna tell you all about it in this episode!

A lecturer at the French university ENSAE since 2006, Nicolas is one of the world experts on SMC. Before that, he graduated from Ecole Polytechnique and… ENSAE, where he did his PhD from 1999 to 2003.

Outside of work, Nicolas enjoys spending time with his family, practicing aikido, and reading a lot of books.

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady and Kurt TeKolste.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Old episodes relevant to these topics:
LBS #14, Hidden Markov Models & Statistical Ecology, with Vianey Leos-Barajas: https://learnbayesstats.com/episode/14-hidden-markov-models-statistical-ecology-with-vianey-leos-barajas/
LBS #41, Think Bayes, with Allen Downey: https://learnbayesstats.com/episode/41-think-bayes-allen-downey/
Nicolas' show notes:
Nicolas on Mastodon: nchopin@mathstodon.xyz
2-hour introduction to particle filters: https://www.youtube.com/watch?v=mE_PJ9ASc8Y
Nicolas' website: https://nchopin.github.io/
Nicolas on GitHub: https://github.com/nchopin
Nicolas on Linkedin: https://www.linkedin.com/in/nicolas-chopin-442a78102/
Nicolas' blog (shared with others): https://statisfaction.wordpress.com/
INLA original paper: https://people.bath.ac.uk/man54/SAMBa/ITTs/ITT2/EDF/INLARueetal2009.pdf
Nicolas' book, An Introduction to Sequential Monte Carlo: https://nchopin.github.io/books.html
Laplace's Demon, A Seminar Series about Bayesian Machine Learning at Scale: https://ailab.criteo.com/laplaces-demon-bayesian-machine-learning-at-scale/
Paper about Expectation Propagation, Leave Pima Indians Alone – Binary Regression as a Benchmark for Bayesian Computation: https://projecteuclid.org/journals/statistical-science/volume-32/issue-1/Leave-Pima-Indians-Alone--Binary-Regression-as-a-Benchmark/10.1214/16-STS581.full
Blackjax website: https://blackjax-devs.github.io/blackjax/

Abstract
by Christoph Bamberg

In episode 82, Nicolas Chopin is our guest. A graduate of the Ecole Polytechnique, he currently lectures at the French university ENSAE. He is a specialist in Sequential Monte Carlo (SMC) samplers and explains in detail what they are, clearing up some confusion about what SMC stands for and when to use these methods. We discuss the advantages of SMC over other commonly used samplers for Bayesian models, such as MCMC or Gibbs samplers. Besides a detailed look at SMC, we also cover INLA, the Integrated Nested Laplace Approximation. INLA can be a fast, approximate inference method for specific kinds of models; it works well for geographic data and relationships, for example between regions in a country. We discuss the difficulties with, and the future of, SMC, INLA, and probabilistic sampling in general.

Transcript
Please note that the transcript was generated automatically and may therefore contain errors. Feel free to reach out if you're willing to correct them.
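For readers who have never seen SMC in action, here is a minimal, self-contained sketch of one popular flavor, tempered SMC: particles start at the prior, the likelihood is switched on gradually, and each reweight/resample step is followed by a Metropolis move to keep particle diversity. This is a toy illustration with a made-up conjugate-normal example, not code from the episode; Nicolas' own `particles` Python package, the companion to his SMC book, is the proper reference implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: observations with unit noise; prior mu ~ N(0, 1).
data = np.array([1.2, 0.8, 1.5, 1.1, 0.9])

def log_prior(mu):
    return -0.5 * mu**2

def log_lik(mu):
    return -0.5 * ((data[None, :] - mu[:, None]) ** 2).sum(axis=1)

n = 4000
particles = rng.normal(0.0, 1.0, n)  # draws from the prior
log_w = np.zeros(n)

# Temper the likelihood from lambda=0 (prior) to lambda=1 (posterior).
# (A real implementation would adapt the schedule via the effective
# sample size instead of using a fixed grid.)
lambdas = np.linspace(0.0, 1.0, 11)
for lam_prev, lam in zip(lambdas[:-1], lambdas[1:]):
    # Reweight by the incremental likelihood factor.
    log_w += (lam - lam_prev) * log_lik(particles)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Multinomial resampling, then reset the weights.
    idx = rng.choice(n, size=n, p=w)
    particles = particles[idx]
    log_w = np.zeros(n)
    # One random-walk Metropolis move per particle to restore diversity,
    # targeting prior * likelihood^lambda.
    prop = particles + rng.normal(0.0, 0.3, n)
    log_acc = (log_prior(prop) + lam * log_lik(prop)
               - log_prior(particles) - lam * log_lik(particles))
    accept = np.log(rng.uniform(size=n)) < log_acc
    particles = np.where(accept, prop, particles)

print(particles.mean())
```

Because the example is conjugate, the particle mean can be checked against the exact posterior mean, `data.sum() / (len(data) + 1)`, roughly 0.92 here.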
Apr 24, 2023 • 1h 15min

#81 Neuroscience of Perception: Exploring the Brain, with Alan Stocker

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

My Intuitive Bayes Online Courses
1:1 Mentorship with me

Did you know that the way your brain perceives speed depends on your priors? And that it's not the same at night? And not the same for everybody?

This is another of these episodes I love, where we dive into neuroscience, how the brain works, and how it relates to Bayesian stats. It's actually a follow-up to episode 77, where Pascal Wallisch told us how the famous black and blue dress tells a lot about the priors through which we perceive the world. So I strongly recommend listening to episode 77 first, and then coming back here to have your mind blown away again, this time by Alan Stocker.

Alan was born and raised in Switzerland. After a PhD in physics at ETH Zurich, he somehow found himself doing neuroscience during a postdoc at NYU. And then he never stopped — he still leads the Computational Perception and Cognition Laboratory at the University of Pennsylvania.

But Alan is also a man of music (playing the piano when he can), a man of coffee (he'll never refuse an Olympia Cremina or a Kafatek) and a man of the outdoors (he loves thrashing through deep powder on his snowboard).

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady and Kurt TeKolste.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Alan's website: https://www.sas.upenn.edu/~astocker/lab/members-files/alan.php
Noise characteristics and prior expectations in human visual speed perception: https://www.nature.com/articles/nn1669
Combining efficient coding with Bayesian inference as a model of human perception:
Video: https://vimeo.com/138238753
Paper: https://www.nature.com/articles/nn.4105
LBS #77, How a Simple Dress Helped Uncover Hidden Prejudices, with Pascal Wallisch: https://learnbayesstats.com/episode/77-how-a-simple-dress-helped-uncover-hidden-prejudices-pascal-wallisch/
LBS #72, Why the Universe is so Deliciously Crazy, with Daniel Whiteson: https://learnbayesstats.com/episode/72-why-the-universe-is-so-deliciously-crazy-daniel-whiteson/

Abstract
by Christoph Bamberg

In episode 81 of the podcast, Alan Stocker helps us update our priors on how the brain works. Alan, born in Switzerland, studied mechanical engineering and earned his PhD in physics before being introduced to the field of neuroscience through an internship. He is now an Associate Professor at the University of Pennsylvania. Our conversation covers various topics related to the human brain and whether what it does can be characterised as Bayesian inference.

Low-level visual processing, such as identifying the orientation of moving grids, can be explained with reference to Bayesian priors and updating under uncertainty; we go through several examples of this, such as driving a car in foggy conditions. More abstract cognitive processes, such as reasoning about politics, may be more difficult to explain in Bayesian terms. We also touch upon the question of to what degree priors may be innate, and how to educate people to change their priors.

In the end, Alan gives two recommendations for improving your Bayesian inferences in a political context: 1) go out and get your own feedback, and 2) try to give and receive true feedback. Listen to the episode for details.

Transcript
Please note that the transcript was generated automatically and may therefore contain errors. Feel free to reach out if you're willing to correct them.
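The "slow-world prior" idea behind the speed-perception findings mentioned above can be captured in a two-line conjugate Gaussian update. This is a toy sketch with invented numbers, not Alan's actual observer model: the point is only that the noisier the sensory measurement (night, fog), the more the percept is pulled toward the prior.

```python
# Gaussian prior over speed, centered on slow speeds: the "slow-world"
# prior from the speed-perception literature (units are arbitrary).
mu_prior, sigma_prior = 0.0, 1.0

def posterior_mean(measurement, sigma_noise):
    # Conjugate Gaussian update: a precision-weighted average of the
    # prior mean and the noisy sensory measurement.
    w_prior = 1 / sigma_prior**2
    w_meas = 1 / sigma_noise**2
    return (w_prior * mu_prior + w_meas * measurement) / (w_prior + w_meas)

true_speed = 2.0
day = posterior_mean(true_speed, sigma_noise=0.3)    # reliable sensory input
night = posterior_mean(true_speed, sigma_noise=1.5)  # noisy sensory input

# The noisier the input, the stronger the pull toward the slow prior:
# the same physical speed is perceived as slower at night.
print(day, night)
```

Both percepts underestimate the true speed, but the nighttime percept is pulled much further toward the slow prior, which is exactly the pattern the speed-perception experiments report.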
Apr 11, 2023 • 1h 9min

#80 Bayesian Additive Regression Trees (BARTs), with Sameer Deshpande

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

I'm sure you know at least one Bart. Maybe you've even used one — but you're not proud of it, because you didn't know what you were doing. Thankfully, in this episode, we'll go to the roots of regression trees — oh yeah, that's what BART stands for. What were you thinking about?

Our tree expert will be none other than Sameer Deshpande. Sameer is an assistant professor of Statistics at the University of Wisconsin-Madison. Prior to that, he completed a postdoc at MIT and earned his Ph.D. in Statistics from UPenn.

On the methodological front, he is interested in Bayesian hierarchical modeling, regression trees, model selection, and causal inference. Much of his applied work is motivated by an interest in understanding the long-term health consequences of playing American-style tackle football. He also enjoys modeling sports data and was a finalist in the 2019 NFL Big Data Bowl.

Outside of statistics, he enjoys cooking, making cocktails, and photography — sometimes doing all of those at the same time…

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, and Arkady.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Sameer's website: https://skdeshpande91.github.io/
Sameer on GitHub: https://github.com/skdeshpande91
Sameer on Twitter: https://twitter.com/skdeshpande91
Sameer on Google Scholar: https://scholar.google.com/citations?user=coVrnWIAAAAJ&hl=en
LBS #50, Ta(l)king Risks & Embracing Uncertainty, with David Spiegelhalter: https://learnbayesstats.com/episode/50-talking-risks-embracing-uncertainty-david-spiegelhalter/
LBS #51, Bernoulli's Fallacy & the Crisis of Modern Science, with Aubrey Clayton: https://learnbayesstats.com/episode/51-bernoullis-fallacy-crisis-modern-science-aubrey-clayton/
LBS #58, Bayesian Modeling and Computation, with Osvaldo Martin, Ravin Kumar and Junpeng Lao: https://learnbayesstats.com/episode/58-bayesian-modeling-computation-osvaldo-martin-ravin-kumar-junpeng-lao/
Book, Bayesian Modeling and Computation in Python: https://bayesiancomputationbook.com/welcome.html
LBS #39, Survival Models & Biostatistics for Cancer Research, with Jacki Buros: https://learnbayesstats.com/episode/39-survival-models-biostatistics-cancer-research-jacki-buros/
Original BART paper (Chipman, George, and McCulloch 2010): https://doi.org/10.1214/09-AOAS285
Hill (2011) on BART in causal inference: https://doi.org/10.1198/jcgs.2010.08162
Hahn, Murray, and Carvalho on Bayesian causal forests: https://doi.org/10.1214/19-BA1195
Main BART package in R: https://cran.r-project.org/web/packages/BART/index.html
dbarts R package: https://cran.r-project.org/web/packages/dbarts/index.html
Sameer's own re-implementation of BART: https://github.com/skdeshpande91/flexBART

Abstract
by Christoph Bamberg

In episode 80, Sameer Deshpande, assistant professor of Statistics at the University of Wisconsin-Madison, is our guest. He had a passion for math from a young age and got into Bayesian statistics at university; he now teaches statistics himself. We talk about the intricacies of teaching Bayesian statistics, such as helping students accept that there are no objective answers.

Sameer's current work focuses on Bayesian Additive Regression Trees (BARTs). He also works on prior specification and on numerous cool applied projects, for example on the effects of playing American football as an adolescent on later health.

We primarily talk about BARTs as a way of approximating complex functions with a collection of step functions. They work pretty well off the shelf and can be applied to various models, such as survival models, linear models, and smooth models. BARTs are somewhat analogous to splines and can capture trajectories over time well. However, they are also a bit of a black box, which makes them hard to interpret.

We further touch upon some of his work on practical problems, such as how cognitive processes change over time, or models of baseball umpires' decision-making.

Transcript
Please note that the following transcript was generated automatically and may therefore contain errors. Feel free to reach out if you're willing to correct them.
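To make the "collection of step functions" idea concrete, here is a toy sketch in Python: a boosting-style caricature that greedily fits tree stumps (single-split step functions) to the residual of a smooth target. Real BART instead places a prior over trees and samples the whole ensemble with MCMC, but the resulting approximation is the same kind of object: a sum of many small step functions.

```python
import numpy as np

# Target: a smooth function we want to approximate with step functions.
def f(x):
    return np.sin(2 * np.pi * x)

x = np.linspace(0, 1, 200)

def fit_stump(x, r):
    """Greedily fit one stump (a single split, two leaf values) to residual r."""
    best = None
    for split in np.linspace(0.05, 0.95, 19):
        left, right = x < split, x >= split
        pred = np.where(left, r[left].mean(), r[right].mean())
        sse = ((r - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, split, r[left].mean(), r[right].mean())
    return best[1:]

residual = f(x).copy()
ensemble = []
for _ in range(50):
    split, lo, hi = fit_stump(x, residual)
    # Shrink each stump's contribution so no single "tree" dominates,
    # loosely mimicking BART's regularizing prior on leaf values.
    contrib = 0.3 * np.where(x < split, lo, hi)
    ensemble.append((split, 0.3 * lo, 0.3 * hi))
    residual -= contrib

# The ensemble's prediction is the target minus what is left unexplained.
approx = f(x) - residual
print(np.abs(residual).max())  # the step ensemble tracks the smooth curve
```

Fifty shrunken stumps already track the sine wave closely, which is the intuition behind "BARTs are somewhat analogous to splines": many crude local pieces add up to a flexible, nearly smooth fit.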
Mar 17, 2023 • 1h 8min

#79 Decision-Making & Cost Effectiveness Analysis for Health Economics, with Gianluca Baio

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Decision-making and cost-effectiveness analyses rarely get as important as in health systems — where matters of life and death are not a metaphor. Bayesian statistical modeling is extremely helpful in this field, with its ability to quantify uncertainty, include domain knowledge, and incorporate causal reasoning.

Specialized in all these topics, Gianluca Baio was the person to talk to for this episode. He'll tell us about these kinds of models, and how to understand them.

Gianluca is currently the head of the department of Statistical Science at University College London. He studied Statistics and Economics at the University of Florence (Italy), and completed a PhD in Applied Statistics, again at the beautiful University of Florence.

He's also a very skilled pizzaiolo — so now I have two reasons to come back and visit Tuscany…

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, and Arkady.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Gianluca's website: https://gianluca.statistica.it/
Gianluca on GitHub: https://github.com/giabaio
Gianluca on Mastodon: https://mas.to/@gianlubaio
Gianluca on Twitter: https://twitter.com/gianlubaio
Gianluca on Linkedin: https://www.linkedin.com/in/gianluca-baio-b893879/
Gianluca's articles on arXiv: https://arxiv.org/a/baio_g_1.html
R for Health Technology Assessment (HTA) Consortium: https://r-hta.org/
LBS #50, Ta(l)king Risks & Embracing Uncertainty, with David Spiegelhalter: https://learnbayesstats.com/episode/50-talking-risks-embracing-uncertainty-david-spiegelhalter/
LBS #45, Biostats & Clinical Trial Design, with Frank Harrell: https://learnbayesstats.com/episode/45-biostats-clinical-trial-design-frank-harrell/
How to find priors intuitively: https://www.youtube.com/watch?v=9shZeqKG3M0
Hierarchical Bayesian Modeling of Survey Data with Post-stratification: https://www.youtube.com/watch?v=efID35XUQ3I
LBS Topical Playlists (also available as RSS feeds on the website): https://www.youtube.com/@learningbayesianstatistics8147/playlists

Abstract
by Christoph Bamberg

In this week's episode, I talk to Gianluca Baio. He is the head of the department of Statistical Science at University College London and earned an MA and a PhD in Statistics and Economics in Florence.

His work primarily focuses on Bayesian modeling for decision-making in healthcare, for example in case studies of novel drugs, asking whether an alternative treatment is worth its cost. Being a relatively young field, health economics seems more open to Bayesian statistics than more established fields. And while Bayesian statistics is becoming more common in clinical trial research, many regulatory bodies still prefer classical p-values. Nonetheless, a lot of COVID modelling was done with Bayesian statistics.

We also talk about the purpose of statistics, which is not to prove things but to reduce uncertainty. Gianluca explains that proper communication is important when eliciting priors and involving people in model building. As for the future, he hopes that statistics will keep its primacy and stay central, rather than becoming embedded in other approaches like data science — notwithstanding that communication with other disciplines is crucial.

Transcript
Please note that the following transcript was generated automatically and may therefore contain errors. Feel free to reach out if you're willing to correct them.
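A core computation in Bayesian cost-effectiveness analysis is easy to illustrate: given posterior draws of a treatment's incremental effectiveness and incremental cost, compute the incremental net monetary benefit at some willingness-to-pay threshold, and from it the probability that the treatment is cost-effective. The sketch below simulates the posterior draws and uses an invented threshold, purely for illustration; in real work the draws would come from a fitted model, and dedicated tooling such as the R for HTA ecosystem linked above would handle the full analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for a new treatment vs. standard care:
# incremental effectiveness (QALYs) and incremental cost (euros).
# In practice these come from a fitted Bayesian model; here we just
# simulate them for illustration.
d_effect = rng.normal(0.30, 0.10, 10_000)    # QALYs gained
d_cost = rng.normal(5_000, 1_500, 10_000)    # extra cost in euros

wtp = 25_000  # willingness to pay per QALY (assumed threshold)

# Incremental net monetary benefit, computed per posterior draw, so the
# decision quantity inherits the full parameter uncertainty.
inb = wtp * d_effect - d_cost

print(f"mean INB: {inb.mean():.0f} euros")
print(f"P(cost-effective) = {(inb > 0).mean():.2f}")
```

Reporting the whole distribution of the net benefit, rather than a single point estimate or p-value, is exactly the kind of uncertainty-quantifying output that makes the Bayesian approach attractive to health economists.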
Mar 1, 2023 • 1h 3min

#78 Exploring MCMC Sampler Algorithms, with Matt D. Hoffman

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Matt Hoffman has already worked on many topics in his life — music information retrieval, speech enhancement, user behavior modeling, social network analysis, astronomy, you name it.

Obviously, picking questions for him was hard, so we ended up talking more or less freely — which is one of my favorite types of episodes, to be honest.

You'll hear about the circumstances in which Matt would advise picking up Bayesian stats, generalized HMC, blocked samplers, why the samplers he works on have food-based names, etc.

In case you don't know him, Matt is a research scientist at Google. Before that, he did a postdoc in the Columbia Stats department, working with Andrew Gelman, and a Ph.D. at Princeton, working with David Blei and Perry Cook.

Matt is probably best known for his work on approximate Bayesian inference algorithms, such as stochastic variational inference and the No-U-Turn Sampler, but he's also worked on a wide range of applications, and contributed to software such as Stan and TensorFlow Probability.

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode and Gabriel Stechschulte.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Matt's website: http://matthewdhoffman.com/
Matt on Google Scholar: https://scholar.google.com/citations?hl=en&user=IeHKeGYAAAAJ&view_op=list_works
The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo: https://www.jmlr.org/papers/volume15/hoffman14a/hoffman14a.pdf
Tuning-Free Generalized Hamiltonian Monte Carlo: https://proceedings.mlr.press/v151/hoffman22a/hoffman22a.pdf
Nested R-hat: Assessing the convergence of Markov chain Monte Carlo when running many short chains: http://www.stat.columbia.edu/~gelman/research/unpublished/nestedRhat.pdf
Automatic Reparameterisation of Probabilistic Programs: http://proceedings.mlr.press/v119/gorinova20a/gorinova20a.pdf

Abstract
written by Christoph Bamberg

In this episode, Matt D. Hoffman, a Google research scientist, discusses his work on probabilistic sampling algorithms with me. Matt has a background in music information retrieval, speech enhancement, user behavior modeling, social network analysis, and astronomy. He came to machine learning and computer science through his interest in synthetic music, and later took a Bayesian modeling class during his PhD.

He mostly works on algorithms, including Markov chain Monte Carlo (MCMC) methods that can take advantage of hardware acceleration: he believes that running many small chains in parallel is better for handling autocorrelation than running a few longer chains. Matt is interested in Bayesian neural networks but is also skeptical about their use in practice. He recently contributed to a generalized Hamiltonian Monte Carlo (HMC) sampler, and previously worked on an alternative to the No-U-Turn Sampler (NUTS) called MEADS. We discuss the applications for these samplers and how they differ from one another. In addition, Matt introduces an improved R-hat convergence diagnostic, nested R-hat, that he and colleagues developed.

Automated Transcript
Please note that the following transcript was generated automatically and may therefore contain errors. Feel free to reach out if you're willing to correct them.
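The "many short chains" idea is easy to demo: run many cheap chains in parallel from overdispersed starting points, and use the R-hat diagnostic to check that between-chain and within-chain variances agree. The sketch below is a toy version with a plain split R-hat on a random-walk Metropolis sampler; it is not Matt's nested R-hat, nor one of his generalized HMC samplers, just the basic diagnostic the nested variant builds on.

```python
import numpy as np

rng = np.random.default_rng(1)

def split_rhat(chains):
    """Potential scale reduction for chains of shape (n_chains, n_draws)."""
    # Split each chain in half so within-chain drift also shows up
    # as apparent between-chain disagreement.
    half = chains.shape[1] // 2
    c = np.concatenate([chains[:, :half], chains[:, half:2 * half]], axis=0)
    m, n = c.shape
    chain_means = c.mean(axis=1)
    B = n * chain_means.var(ddof=1)        # between-chain variance
    W = c.var(axis=1, ddof=1).mean()       # within-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

# Many short random-walk Metropolis chains on a standard normal target,
# started overdispersed -- the "many short chains" regime.
n_chains, n_draws = 64, 300
x = rng.normal(0, 5, n_chains)             # overdispersed starting points
draws = np.empty((n_chains, n_draws))
for t in range(n_draws):
    prop = x + rng.normal(0, 1.0, n_chains)
    # Accept with the Metropolis ratio for a standard normal target.
    accept = np.log(rng.uniform(size=n_chains)) < 0.5 * (x**2 - prop**2)
    x = np.where(accept, prop, x)
    draws[:, t] = x

# Discard a short warm-up; once chains have mixed, R-hat should be near 1.
print(round(float(split_rhat(draws[:, 100:])), 2))
```

With 64 chains, only 200 kept draws per chain are needed for a clean diagnostic, which is the appeal of the many-short-chains regime on parallel hardware.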
Feb 13, 2023 • 1h 9min

#77 How a Simple Dress Helped Uncover Hidden Prejudices, with Pascal Wallisch

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!I love dresses. Not on me, of course — I’m not nearly elegant enough to pull it off. Nevertheless, to me, dresses are one of the most elegant pieces of clothing ever invented.And I like them even more when they change colors. Well, they don’t really change colors — it’s the way we perceive the colors that can change. You remember that dress that looked black and blue to some people, and white and gold to others? Well that’s exactly what we’ll dive into and explain in this episode.Why do we literally see the world differently? Why does that even happen beyond our consciousness, most of the time? And cherry on the cake: how on Earth could this be related to… priors?? Yes, as in Bayesian priors!Pascal Wallisch will shed light on all these topics in this episode. Pascal is a professor of Psychology and Data Science at New York University, where he studies a diverse range of topics including perception, cognitive diversity, the roots of disagreement and psychopathy.Originally from Germany, Pascal did his undergraduate studies at the Free University of Berlin. He then received his PhD from the University of Chicago, where he studied visual perception.In addition to scientific articles on psychology and neuroscience, he wrote multiple books on scientific computing and data science. As you’ll hear, Pascal is a wonderful science communicator, so it's only normal that he also writes for a general audience at Slate or the Creativity Post, and has given public talks at TedX and Think and Drink.Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). 
Check out his awesome work at https://bababrinkman.com/ !Thank you to my Patrons for making this episode possible!Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R and Nicolas Rode.Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)Links from the show:Pascal’s website: https://blog.pascallisch.net/about/Pascal on Twitter: https://twitter.com/pascallischPascal on Linkedin: https://www.linkedin.com/in/pascal-wallisch-0109b77“Socks & Crocs”, You Are Not So Smart podcast, Episode 200: https://youarenotsosmart.com/2021/02/22/yanss-200-how-a-divisive-photograph-of-a-perceptually-ambiguous-dress-led-two-researchers-to-build-the-nuclear-bomb-of-cognitive-science-out-of-socks-and-crocs/You Are Not So Smart – Live in New York at The Bell House: https://www.youtube.com/watch?v=277HGgqrrUM&t=1sPascal’s paper – Illumination assumptions account for individual differences in the perceptual interpretation of a profoundly ambiguous stimulus in 
the color domain: https://jov.arvojournals.org/article.aspx?articleid=2617976 Neural Data Science – A Primer with MATLAB and Python: https://www.amazon.com/Neural-Data-Science-MATLAB%C2%AE-PythonTM/dp/0128040432 What Color Is The Dress? The Debate That Broke The Internet: https://www.nhpr.org/2015-02-27/what-color-is-the-dress-the-debate-that-broke-the-internet#stream/0 The inside story of the ‘white dress, blue dress’ drama that divided a planet: https://www.washingtonpost.com/news/morning-mix/wp/2015/02/27/the-inside-story-of-the-white-dress-blue-dress-drama-that-divided-a-nation/ Noise characteristics and prior expectations in human visual speed perception: https://www.nature.com/articles/nn1669 Bayesian integration in sensorimotor learning: https://www.nature.com/articles/nature02169 Abstract by Christoph Bamberg: In our conversation, Pascal Wallisch, a professor of Psychology and Data Science at New York University, talked about his research on perception, cognitive diversity, the roots of disagreement, and psychopathy. Pascal did his undergraduate studies at the Free University of Berlin and then received his PhD from the University of Chicago, where he studied visual perception. Pascal is also a TEDx and Think and Drink speaker, and a writer for Slate and Creativity Post. We discussed Pascal's origin story, his current work on cognitive diversity, and the importance of priors in perception. Pascal used the example of "the Dress" picture that went viral in 2015, where people saw either black and blue or white and gold. He explained how prior experience and knowledge can affect how people perceive colors and motion, and how priors can bias people for action. We discussed to what extent the brain might be Bayesian, and which functions are probably not so well described in Bayesian terms. 
Pascal also discussed how priors can be changed through experience and exposure. Finally, Pascal emphasized that people have different priors and perspectives, and that understanding these differences is crucial for creating a more diverse and inclusive society. Automated Transcript: Please note that the following transcript was generated automatically and may therefore contain errors. Feel free to reach out if you’re willing to correct them.
Feb 1, 2023 • 1h 11min

#76 The Past, Present & Future of Stan, with Bob Carpenter

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! How does it feel to switch careers and start a postdoc at age 47? How was it to be one of the people who created the probabilistic programming language Stan? What should the Bayesian community focus on in the coming years? These are just a few of the questions I had for my illustrious guest in this episode — Bob Carpenter. Bob is, of course, a Stan developer, and comes from a math background, with an emphasis on logic and computer science theory. He then did his PhD in cognitive and computer sciences at the University of Edinburgh. He moved from a professor position at Carnegie Mellon to industry research at Bell Labs, to working with Andrew Gelman and Matt Hoffman at Columbia University. Since 2020, he's been working at the Flatiron Institute, a non-profit focused on algorithms and software for science. In his free time, Bob loves to cook, see live music, and play role-playing games — think Monster of the Week, Blades in the Dark, and Fate. Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. 
Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin and Raphaël R.Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)Links from the show:Bob’s website: https://bob-carpenter.github.ioBob on GitHub: https://github.com/bob-carpenterBob on Google Scholar: https://scholar.google.com.au/citations?user=kPtKWAwAAAAJ&hl=enStat modeling blog: https://statmodeling.stat.columbia.eduStan home page: https://mc-stan.org/BridgeStan home page: https://github.com/roualdes/bridgestanbayes-infer home page: https://github.com/bob-carpenter/bayes-inferCrowdsourcing with item difficulty: https://github.com/bob-carpenter/rater-difficulty-paperPathfinder VI system: https://www.jmlr.org/papers/v23/21-0889.htmlFlatiron Institute home page: https://www.simonsfoundation.org/flatiron/0 to 100K in 10 years – Nurturing an open-source software community: https://www.youtube.com/watch?v=P9gDFHl-Hss&t=81sInformation Theory, Inference and Learning Algorithms: https://www.amazon.com/Information-Theory-Inference-Learning-Algorithms/dp/0521642981LBS #20 – Regression and Other Stories, with Andrew Gelman, Jennifer Hill & Aki Vehtari: https://learnbayesstats.com/episode/20-regression-and-other-stories-with-andrew-gelman-jennifer-hill-aki-vehtari/LBS #27 – Modeling the US Presidential Elections, with Andrew Gelman & Merlin Heidemanns: https://learnbayesstats.com/episode/27-modeling-the-us-presidential-elections-with-andrew-gelman-merlin-heidemanns/LBS #17 – Reparametrize Your Models Automatically, with Maria Gorinova: 
https://learnbayesstats.com/episode/17-reparametrize-your-models-automatically-with-maria-gorinova/ LBS #36 – Bayesian Non-Parametrics & Developing Turing.jl, with Martin Trapp: https://learnbayesstats.com/episode/36-bayesian-non-parametrics-developing-turing-julia-martin-trapp/ LBS #19 – Turing, Julia and Bayes in Economics, with Cameron Pfiffer: https://learnbayesstats.com/episode/19-turing-julia-and-bayes-in-economics-with-cameron-pfiffer/ LBS #74 – Optimizing NUTS and Developing the ZeroSumNormal Distribution, with Adrian Seyboldt: https://learnbayesstats.com/episode/74-optimizing-nuts-developing-zerosumnormal-distribution-adrian-seyboldt/ Bayesian Workflow paper: https://arxiv.org/abs/2011.01808 BAyesian Model-Building Interface (Bambi) in Python: https://bambinos.github.io/bambi/ On Being Certain: Believing You Are Right Even When You're Not: https://www.amazon.com/Being-Certain-Believing-Right-Youre/dp/031254152X Abstract by Christoph Bamberg: In this episode, you meet the man behind the code: Bob Carpenter, one of the core developers of Stan, a popular probabilistic programming language. After working in computational linguistics for some time, Bob became a postdoc with Andrew Gelman to really learn statistics and modelling. There, he and a small team developed the first implementation of Stan. We talk about the challenges that came with the growing team, and about open-source conventions. Besides the initial intentions behind Stan and its beginnings, we talk about the future of probabilistic programming. Creating a tool for people with different levels of mathematical and programming knowledge is a big challenge, and such tools can also be harder for users to work with. We discuss why Bayesian statistical programming is popular nonetheless, and what makes it uniquely suited to research.
Jan 20, 2023 • 1h 7min

#75 The Physics of Top Gun 2 Maverick, with Jason Berndt

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! If you’re a nerd like me, you’re always curious about the physics of any situation. So, obviously, when I watched Top Gun 2, I became fascinated by the aerodynamics of fighter jets. And it so happens that one of my friends used to be a fighter pilot for the Canadian army… Immediately, I thought this would make for a cool episode — and here we are! Actually, Jason Berndt wanted to be a pilot from the age of 3. When he was 6, he went to an air show, and then specifically wanted to become a fighter pilot. In his teens, he learned how to fly sailplanes and small single-engine aircraft. At age 22, he got a bachelor’s in aero engineering from the Royal Military College, and then — well, he’ll tell you the rest in the episode. Now in his thirties, he owns real estate and created his own company, My Two Brows, selling temporary eyebrow tattoos — which, weirdly enough, is actually related to his time in the army… In his free time, Jason plays the guitar, travels around the world (that’s actually how we met), and loves chasing adrenaline however he can (paragliding, scuba diving, you name it!). Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. 
Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin and Raphaël R. Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;) Links from the show: My Two Brows website: https://mytwobrows.com/ My Two Brows on Instagram: https://www.instagram.com/my_two_brows/ My Two Brows on YouTube: https://www.youtube.com/channel/UC6eQgQ4qoGE2RStDJkumUGg PyMC Labs Workshop – Hierarchical Bayesian Modeling of Survey Data with Post-stratification: https://www.youtube.com/watch?v=efID35XUQ3I Abstract written by Christoph Bamberg: In this episode of the Learning Bayesian Statistics podcast, we do not talk about Bayesianism, let alone statistics. Instead, we dive into the world of fighter jets and Top Gun pilots with Jason Berndt. Jason is a former fighter jet pilot turned entrepreneur. He looks back at his time as a pilot, how he got there, the challenges and thrills of the job, and how it influences him now in his new life. We also touch upon physics- and science-related topics like G-force, centrifugal force, automation in critical environments like flying a fighter jet, and human-computer interaction. Jason discusses the recent movie Top Gun: Maverick, how realistic the flying was, and its depiction of fighter pilots’ lives.
Jan 5, 2023 • 1h 12min

#74 Optimizing NUTS and Developing the ZeroSumNormal Distribution, with Adrian Seyboldt

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! We need to talk. I had trouble writing this introduction. Not because I didn’t know what to say (that’s hardly ever an issue for me), but because a conversation with Adrian Seyboldt always takes deliciously unexpected turns. Adrian is one of the most brilliant, interesting and open-minded people I know. It turns out he’s courageous too: although he’s not a fan of public speaking, he accepted my invitation on this show — and I’m really glad he did! Adrian studied math and bioinformatics in Germany and now lives in the US, where he enjoys doing maths, baking bread and hiking. We talked about the why and how of his new project, Nutpie, a more efficient implementation of the NUTS sampler in Rust. We also dived deep into the new ZeroSumNormal distribution he created, which is available from PyMC 4.2 onwards — what is it? Why would you use it? And when? Adrian will also tell us about his favorite type of models, as well as what he currently sees as the biggest hurdles in the Bayesian workflow. Each time I talk with Adrian, I learn a lot and am filled with enthusiasm — and now I hope you will too! Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. 
Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey and Andreas Kröpelin. Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;) Links from the show: LBS on Twitter: https://twitter.com/LearnBayesStats LBS on Linkedin: https://www.linkedin.com/company/learn-bayes-stats/ Adrian on GitHub: https://github.com/aseyboldt Nutpie repository: https://github.com/pymc-devs/nutpie ZeroSumNormal distribution: https://www.pymc.io/projects/docs/en/stable/api/distributions/generated/pymc.ZeroSumNormal.html Pathfinder – A parallel quasi-Newton algorithm for reaching regions of high probability mass: https://statmodeling.stat.columbia.edu/2021/08/10/pathfinder-a-parallel-quasi-newton-algorithm-for-reaching-regions-of-high-probability-mass/ Abstract by Christoph Bamberg: Adrian Seyboldt, the guest of this week’s episode, is an active developer of the PyMC library in Python and of his new tool nutpie, written in Rust. He is also a colleague at PyMC Labs and a friend. So naturally, this episode gets technical and nerdy. We talk about parametrisation, a topic important for anyone implementing a Bayesian model, and what to do or avoid (don't use the mean of the data!). Adrian explains a new approach to setting categorical parameters, using the ZeroSumNormal distribution he developed. The approach is explained in an accessible way, with examples, so everyone can understand and implement it themselves. We also talked about further technical topics, like initialising a sampler, the use of warm-up samples, mass matrix adaptation, and much more. The difference between probability theory and statistics, as well as Adrian’s view on the current challenges in Bayesian statistics, completes the episode. 
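To give a flavor of the zero-sum idea mentioned in the abstract: a categorical (group) effect with n levels has a redundant degree of freedom if an intercept is also present, and constraining the effects to sum to zero removes it. The sketch below is only an illustration in plain NumPy, not PyMC's actual implementation (in practice you would use pm.ZeroSumNormal directly); the function and variable names here are my own. It builds n effects from n - 1 unconstrained values via an orthonormal basis of the sum-to-zero subspace:

```python
import numpy as np

def zero_sum_effects(raw, n):
    """Map n-1 unconstrained values to n group effects that sum to zero."""
    # The centering matrix I - J/n projects onto the subspace of vectors
    # whose entries sum to zero; it has rank n-1, so the first n-1 columns
    # of Q from its QR decomposition form an orthonormal basis of that
    # subspace (each basis vector is orthogonal to the all-ones vector).
    center = np.eye(n) - np.ones((n, n)) / n
    q, _ = np.linalg.qr(center)
    basis = q[:, : n - 1]              # shape (n, n-1)
    return basis @ np.asarray(raw)     # shape (n,), sums to ~0

# Three free parameters produce four constrained group effects.
effects = zero_sum_effects([1.0, -2.0, 0.5], 4)
print(effects, effects.sum())          # sum is zero up to floating point
```

Because the basis is orthonormal, independent normal priors on the n - 1 raw values induce a normal distribution on the n effects restricted to the sum-to-zero plane, which is the intuition behind the ZeroSumNormal distribution discussed in the episode.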
