
Learning Bayesian Statistics
Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date, or simply understand what Bayesian inference is?
Then this podcast is for you! You'll hear from researchers and practitioners of all fields about how they use Bayesian statistics, and how in turn YOU can apply these methods in your modeling workflow.
When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible.
So I created "Learning Bayesian Statistics", where you'll get to hear how Bayesian statistics are used to detect dark matter in outer space, forecast elections or understand how diseases spread and can ultimately be stopped.
But this show is not only about successes -- it's also about failures, because that's how we learn best. So you'll often hear the guests talking about what *didn't* work in their projects, why, and how they overcame these challenges. Because, in the end, we're all lifelong learners!
My name is Alex Andorra by the way, and I live in Estonia. By day, I'm a data scientist and modeler at the https://www.pymc-labs.io/ (PyMC Labs) consultancy. By night, I don't (yet) fight crime, but I'm an open-source enthusiast and core contributor to the python packages https://docs.pymc.io/ (PyMC) and https://arviz-devs.github.io/arviz/ (ArviZ). I also love https://www.pollsposition.com/ (election forecasting) and, most importantly, Nutella. But I don't like talking about it – I prefer eating it.
So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you -- just subscribe! You can also support the show and https://www.patreon.com/learnbayesstats (unlock exclusive Bayesian swag on Patreon)!
Latest episodes

Dec 23, 2022 • 1h 1min
#73 A Guide to Plotting Inferences & Uncertainties of Bayesian Models, with Jessica Hullman
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

I'm guessing you already tried to communicate the results of a statistical model to non-stats people — it's hard, right? I'll be honest: sometimes, I even prefer taking notes during meetings to doing that… But shhh, that's our secret.

But all of this was before. Before I talked with Jessica Hullman. Jessica is the Ginny Rometty associate professor of computer science at Northwestern University. Her work revolves around how to design interfaces to help people draw inductive inferences from data. Her research has explored how to best align data-driven interfaces and representations of uncertainty with human reasoning capabilities, which is what we'll mainly talk about in this episode.

Jessica also tries to understand the role of interactive analysis across different stages of a statistical workflow, and how to evaluate data visualization interfaces. Her work has been awarded multiple best paper and honorable mention awards, and she frequently speaks and blogs on topics related to visualization and reasoning about uncertainty — as usual, you'll find the links in the show notes.

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox and Trey Causey.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

General links from the show:
Jessica's website: http://users.eecs.northwestern.edu/~jhullman/
Jessica on Twitter: https://twitter.com/JessicaHullman
Midwest Uncertainty Collective: https://mucollective.northwestern.edu/
Jessica's posts on Andrew Gelman's blog: https://statmodeling.stat.columbia.edu/
Jessica's posts on Medium: https://medium.com/multiple-views-visualization-research-explained
LBS #66, Uncertainty Visualization & Usable Stats, with Matthew Kay: https://learnbayesstats.com/episode/66-uncertainty-visualization-usable-stats-matthew-kay/

Some of Jessica's research that she mentioned:
A Bayesian Model of Cognition to Improve Data Visualization: https://mucollective.northwestern.edu/files/2019-BayesianVis-CHI.pdf
Visual Reasoning Strategies for Effect Size Judgments and Decisions: https://mucollective.northwestern.edu/files/2020%20-%20Kale,%20Visual%20Reasoning%20Strategies%20for%20Effect%20Size%20Judgements.pdf
Hypothetical Outcome Plots Help Untrained Observers Judge Trends in Ambiguous Data: https://mucollective.northwestern.edu/files/2018-HOPsTrends-InfoVis.pdf

Behavioral economics paper Jessica mentioned:
A Model of Non-belief in the Law of Large Numbers: https://scholar.harvard.edu/files/rabin/files/barney2014.pdf

More on David Blackwell:
Summary of his career: https://stat.illinois.edu/news/2020-07-17/david-h-blackwell-profile-inspiration-and-perseverance
His original work on Blackwell ordering: https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-24/issue-2/Equivalent-Comparisons-of-Experiments/10.1214/aoms/1177729032.pdf
Lectures on day 5 of this workshop covered his work on approachability: https://old.simons.berkeley.edu/workshops/schedule/16924

Abstract
by Christoph Bamberg
Professor Jessica Hullman from Northwestern University is an expert in designing visualisations that help people learn from data and not fall prey to biases. She focuses on the proper communication of uncertainty, both theoretically and empirically. She addresses questions like "Can a Bayesian model of reasoning explain apparently biased reasoning?", "What kind of visualisation guides readers best to a valid inference?", and "How can biased reasoning be so prevalent - are there scenarios where not following the canonical reasoning steps is optimal?".
In this episode, we talk about her experimental studies on the communication of uncertainty through visualisation, the scenarios in which it may not be optimal to focus too much on uncertainty, and how we can design models of reasoning that explain actual behaviour rather than discarding it as biased.

Dec 3, 2022 • 1h 14min
#72 Why the Universe is so Deliciously Crazy, with Daniel Whiteson
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

What happens inside a black hole? Can we travel back in time? Why is the Universe even here? This is the type of chill question that we're all asking ourselves from time to time — you know, when we're sitting on the beach.

This is also the kind of question Daniel Whiteson loves to talk about in his podcast, "Daniel and Jorge Explain the Universe", co-hosted with Jorge Cham, the author of PhD Comics. Honestly, it's one of my favorite shows ever, so I warmly recommend it. Actually, if you've ever hung out with me in person, there is a high chance I started nerding out about it…

Daniel is, of course, a professor of physics at the University of California, Irvine, and also a researcher at CERN, using the Large Hadron Collider to search for exotic new particles — yes, these are particles that put little umbrellas in their drinks and taste like coconut.

In his free time, Daniel loves reading, sailing and baking — I can confirm that he makes a killer Nutella roll!

Oh, I almost forgot: Daniel and Jorge wrote two books — We Have No Idea and FAQ about the Universe — which, again, I strongly recommend. They are among my all-time favorites.

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek and Paul Cox.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
PyMC Labs Meetup, Dec 8th 2022, A Candle in the Dark – How to Use Hierarchical Post-Stratification with Noisy Data: https://www.meetup.com/pymc-labs-online-meetup/events/289949398/
Daniel's website: https://sites.uci.edu/daniel/
Daniel on Twitter: https://twitter.com/DanielWhiteson
"Daniel and Jorge Explain the Universe": https://sites.uci.edu/danielandjorge/?pname=danielandjorge.com&sc=dnsredirect
We Have No Idea – A Guide To The Unknown Universe: https://phdcomics.com/noidea/
Frequently Asked Questions About The Universe: https://sites.uci.edu/universefaq/
Learning to Identify Semi-Visible Jets: https://arxiv.org/abs/2208.10062
Twitter thread about the paper above: https://twitter.com/DanielWhiteson/status/1561929005653057536

Abstract
by Christoph Bamberg
Big questions are tackled in episode 72 of the Learning Bayesian Statistics podcast: "What is the nature of the universe?", "What is the role of science?", "How are findings in physics created and communicated?", "What is randomness actually?". This episode's guest, Daniel Whiteson, is just the right person to address these questions. He is well-known for his own podcast "Daniel and Jorge Explain the Universe", wrote several popular science books on physics and works as a particle physicist with data from the particle physics laboratory CERN.
He manages to make sense of astronomy, although he is not much of a star-gazer himself. Daniel prefers to look for weird stuff in the data of colliding particles and ask unexpected questions. This comes with great statistical challenges that he tackles with Bayesian statistics and machine learning, while he also subscribes to the frequentist philosophy of statistics.
In the episode, Alex and Daniel touch upon many of the great ideas in quantum physics, the Higgs boson, Schrödinger's cat, John Bell's quantum entanglement discoveries, true random processes and much more. Mixed in throughout are pieces of advice for anyone scientifically-minded and curious about the universe.

Nov 14, 2022 • 1h 5min
#71 Artificial Intelligence, Deepmind & Social Change, with Julien Cornebise
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

This episode will show you different sides of the tech world. The one where you research and apply algorithms, where you get super excited about image recognition and AI-generated art. And the one where you support social change actors — aka the "AI for Good" movement.

My guest for this episode is, quite naturally, Julien Cornebise. Julien is an Honorary Associate Professor at UCL. He was an early researcher at DeepMind, where he designed its early algorithms. He then worked as a Director of Research at ElementAI, where he built and led the London office and "AI for Good" unit.

After his theoretical work on Bayesian methods, he had the privilege to work with the NHS to diagnose eye diseases; with Amnesty International to quantify abuse on Twitter and find destroyed villages in Darfur; and with Forensic Architecture to identify teargas canisters used against civilians.

Other than that, Julien is an avid reader, and loves dark humor and picking up his son from school at the "hour of the daddies and the mommies".

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Aaron Jones, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek and Paul Cox.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Julien's website: https://cornebise.com/julien/
Julien on Twitter: https://twitter.com/JCornebise
Julien on LinkedIn: https://www.linkedin.com/in/juliencornebise/
Julien on Scholar: https://scholar.google.co.uk/citations?user=6fkVVz4AAAAJ&hl=en&oi=ao
Stable Diffusion is a really big deal: https://simonwillison.net/2022/Aug/29/stable-diffusion/
LBS #21, Gaussian Processes, Bayesian Neural Nets & SIR Models, with Elizaveta Semenova: https://learnbayesstats.com/episode/21-gaussian-processes-bayesian-neural-nets-sir-models-with-elizaveta-semenova/
pymc.find_constrained_prior function: https://www.pymc.io/projects/docs/en/stable/api/generated/pymc.find_constrained_prior.html#pymc.find_constrained_prior
LBS #50, Ta(l)king Risks & Embracing Uncertainty, with David Spiegelhalter: https://learnbayesstats.com/episode/50-talking-risks-embracing-uncertainty-david-spiegelhalter/
LBS #67, Exoplanets, Cool Worlds & Life in the Universe, with David Kipping: https://learnbayesstats.com/episode/67-exoplanets-cool-worlds-life-in-universe-david-kipping/

Abstract
by Christoph Bamberg
Julien Cornebise goes on a deep dive into deep learning with us in episode 71. He calls himself a "passionate, impact-driven scientist in Machine Learning and Artificial Intelligence". He holds an Honorary Associate Professor position at UCL, was an early researcher at DeepMind, went on to become Director of Research at ElementAI and worked with institutions ranging from the NHS in Great Britain to Amnesty International.
He is a strong advocate for using Artificial Intelligence and computer engineering tools for good, and cautions us to think carefully about who we develop models and tools for. Ask the questions: What could go wrong? How could this be misused? The list of projects where he used his computing skills for good is long and diverse: with the NHS he developed methods to measure and diagnose eye diseases, and for Amnesty International he helped quantify the abuse female journalists receive on Twitter, based on a database of tweets labeled by volunteers.
Beyond these applied projects, Julien and Alex muse about the future of structured models in times of more and more popular deep learning approaches, and the fascinating potential of these new approaches. He advises anyone interested in these topics to get comfortable with experimenting by themselves and potentially breaking things in a non-consequential environment. And don't be too intimidated by more seasoned professionals, he adds, because they probably have imposter syndrome themselves, which is a sign of being aware of one's own limitations.

Automated Transcript
Please note that the following transcript was generated automatically and may therefore contain errors. Feel free to reach out if you're willing to correct them.

Oct 22, 2022 • 1h 6min
#70 Teaching Bayes for Biology & Biological Engineering, with Justin Bois
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Back in 2016, when I started dedicating my evenings and weekends to learning how to code and do serious stats, I was a bit lost… Where do I start? Which language do I pick? Why are all those languages just named with one single letter??

Then I found some stats classes by Justin Bois — and it was a tremendous help to get out of that wood (and yes, this was a pun). I really loved Justin's teaching because he was making the assumptions explicit, and also explained them — which was so much more satisfying to my nerdy brain, which always wonders why we're making this assumption and not that one.

So of course, I'm thrilled to be hosting Justin on the show today! Justin is a Teaching Professor in the Division of Biology and Biological Engineering at Caltech, California, where he also did his PhD. Before that, he was a postdoc in biochemistry at UCLA, as well as the Max Planck Institute in Dresden, Germany.

Most importantly for the football fans, he's a goalkeeper — actually, the day before recording, he saved two penalty kicks… and even scored a goal! A big fan of Los Angeles Football Club, Justin is also a magic enthusiast — he is indeed a member of the Magic Castle…

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Matthew McAnear, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Aaron Jones, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken and Or Duek.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Justin's website: http://bois.caltech.edu/index.html
Justin on GitHub: https://github.com/justinbois/
Justin's course on Data analysis with frequentist inference: https://bebi103a.github.io/
Justin's course on Bayesian inference: https://bebi103b.github.io/
LBS #6, A principled Bayesian workflow, with Michael Betancourt: https://learnbayesstats.com/episode/6-a-principled-bayesian-workflow-with-michael-betancourt/
Physical Biology of the Cell: https://www.routledge.com/Physical-Biology-of-the-Cell/Phillips-Kondev-Theriot-Garcia-Phillips-Kondev-Theriot-Garcia/p/book/9780815344506
Knowledge Illusion – Why We Never Think Alone: https://www.amazon.fr/Knowledge-Illusion-Never-Think-Alone/dp/039918435X
Sustainable Energy – Without the Hot Air: https://www.amazon.com/Sustainable-Energy-Without-Hot-Air/dp/0954452933
Information Theory, Inference and Learning Algorithms: https://www.amazon.com/Information-Theory-Inference-Learning-Algorithms/dp/0521642981

Abstract
By Christoph Bamberg
Justin Bois did his Bachelor and PhD in Chemical Engineering before working as a Postdoctoral Researcher in Biological Physics, Chemistry and Biological Engineering. He now works as a Teaching Professor in the Division of Biology and Biological Engineering at Caltech, USA. He first got into Bayesian statistics like many scientists in fields like biology or psychology: by wanting to understand what the statistics he was using actually mean. His central question was "what is parameter estimation actually?". After all, that's a lot of what doing quantitative science is on a daily basis! The Bayesian framework allowed him to find an answer and made him feel like a more complete scientist.
As a teaching professor, he is now helping students of life sciences such as neuroscience or biological engineering to become true Bayesians. His teaching covers what you need to become a proficient Bayesian analyst, from opening datasets to Bayesian inference. He emphasizes the importance of the models implicit in quantitative research and shows that we do in most cases have a prior idea of an estimand's magnitude. Justin believes that we are naturally programmed to think in a Bayesian framework, but that we should still mess up sometimes to learn that statistical techniques are fragile. You can find some of his teaching on his website.

Transcript
This transcript was generated automatically. Some transcription errors may have remained. Feel free to reach out if you're willing to correct them.

[00:00:00] In 2016, when I started dedicating my evenings and weekends to learning how to code and do serious stats, I was a bit lost, to be honest. Where do I start? Which language do I pick? Why are all those languages just named with one single letter, like R or C? Then I found some stats classes by Justin Bois. And it was a tremendous help to get out of that wood. And yes, this was a pun.
I really enjoyed Justin's teaching because he was making the assumptions explicit, and he also explained them, which was so much more satisfying to my nerdy brain, which always wonders why we're making this assumption and not that one. So of course, I'm thrilled to be hosting Justin on the show today. Justin is a teaching professor in the Division of Biology and Biological Engineering at Caltech, California, where he also did his PhD. Before that, he was a postdoc in biochemistry at UCLA as well as the Max Planck Institute in Dresden, Germany. Most importantly, for the football fans, Justin is a goalkeeper. [00:01:00] Actually, the day before recording, he saved two penalty kicks and even scored a goal. Yes, a big fan of Los Angeles Football Club, Justin is also a magic enthusiast. He is indeed a member of the Magic Castle. This is Learning Bayesian Statistics, episode 70, recorded September 2nd, 2022. Welcome to Learning Bayesian Statistics, a fortnightly podcast on Bayesian inference, the methods, the projects, and the people who make it possible. I'm your host, Alex Andorra. You can follow me on Twitter at alex_andorra, like the country. For any info about the podcast, learnbayesstats.com is the place to be. Show notes, becoming a corporate sponsor, supporting LBS on Patreon, unlocking Bayesian merch, everything is in there. That's learnbayesstats.com. If with all that info a model is still resisting you, or if you find my voice especially smooth and [00:02:00] want me to come and teach Bayesian stats in your company, then reach out at alex.andorra@pymc-labs.io or book a call with me at learnbayesstats.com. Thanks a lot, folks, and best Bayesian wishes to you all. Let me show you how to be a good Bayesian, change your predictions after taking information in, and if you're thinking I'll be less than amazing, let's adjust those expectations. What's a Bayesian? It's someone who cares about evidence and doesn't jump to assumptions based on intuitions and prejudice. A Bayesian makes predictions on the best available info and adjusts the probability 'cause every belief is provisional. And when I kick a flow, mostly I'm watching eyes widen, maybe 'cause my likeness lowers expectations of tight rhymin'. How would I know unless I'm rhymin' in front of a bunch of blind men, droppin' placebo-controlled science like I'm Richard Feynman. Justin Bois, welcome to Learning Bayesian Statistics. Thank you. Happy to be here. Yes. Very [00:03:00] happy to have you here because, well, you know that, but listeners do not. But you are actually one of the first people who introduced me back to, uh, statistics and programming in 2017 when I started my career shift. So it's awesome to have you here today. I'm glad my stuff helped you get going. That's, that's the point. That's the goal. Yeah. Yeah, that's really cool. And also, I'm happy to have learned how you pronounce your last name because in French, you know, that's a French name. I dunno if you have some French origin, but in French it means… I know, I know it's a French name, but it's actually, as far as I understand, my family's from Northern Germany, and there's a, a name there that's spelled B-E-U-S-S, and it's pronounced like, in Germany, you say "Boce". And then it got anglicized, I think, when I moved to the US. But, uh, I was actually recently, just this past summer, in Lausanne, Switzerland, and there was a giant wood recycling bin with my name on it, it said B-O-I-S. So I got my picture taken next to that. So yeah. Yeah.
Lausanne is in the French-speaking part of Switzerland. [00:04:00] That's right. Cool. So we're starting already with the origin story, so I love that, 'cause it's actually always my first question. So how did you jump into the stats and biology worlds, and like, how sinuous of a path was it? Well, I think the path that I had toward really thinking carefully about statistical inferences is a very common path among scientists, meaning scientists outside of data scientists and, and maybe also outside of really data-rich branches of science such as astronomy. So I studied chemical engineering as an undergraduate. It was a standard program. I didn't really do any undergrad research or anything, but I got into a little bit of statistics when I had a job at Kraft Foods after undergrad, where I worked with the statistician on doing some predictive modeling about, uh, some food safety issues. And I thought it was interesting, but I sort of just, I was an engineer. I was making the product, I was implementing the stuff in the production facility, and the statistician kind of took care of [00:05:00] everything else. I thought, I thought he was one of the coolest people in the company. Um, but I didn't really, you know, it didn't really hook me in to really thinking about that. But I went and did a PhD, and my PhD really didn't involve much experimentation at all. I was actually doing computational modeling of, like, how nucleic acids get their structure and shape and things. And that was, it just didn't really involve analysis of much data. Then in my post-doctoral studies, in my post-doctoral work, I was working with some experimentalists who had some data sets, and they needed to do estimates of parameters based on some theoretical models that I had derived or worked on. And I had done some stuff in, you know, various lab classes and stuff, but it's your standard thing. It's like, ooh, I know how to do a curve fit. Meaning I can, I guess, in the Python way I would do it, scipy.optimize.curve_fit. Or, you know, in MATLAB I could do a least squares or something like that. And, and I knew this idea of minimizing the sum of the squares of the residuals, and that's gonna get you [00:06:00] a line that looks close to what your data points are. But the inference problems, the theoretical curves were actually a little bit, say for some of 'em, there was no closed-form solution. They were actually solutions to differential equations. And so the actual theoretical treatment I had was a little bit more complicated. And so I needed to start to think a little bit more carefully about exactly how we're going about estimating the parameters thereof, right? And so I kind of just started grabbing, uh, books, and I discovered quickly that I had no idea what I was doing, and actually neither did anybody around me. And I don't mean that pejoratively, it's just, it's a very common thing among scientists. A lot of people in the sciences that aren't, that don't work as much with data. And perhaps it's less common now, but it was definitely more common, you know, 10, 15, uh, years ago. And so I just kind of started looking into how we should actually think about the estimates of [00:07:00] parameters given a data set. And really what happened was the problem became crystallized for me, the problem of parameter estimation. And I had never actually heard that phrase, parameter estimation. To me, it was: find the best-fit parameters.
If your curve goes through your data points, that means that you're, the theory that you derived is probably pretty good. And of course, I didn't think about what the word "probably" meant there. I, I only knew it colloquially, right? And so, 'cause I was focused on deriving what the theory is. And of course that's a whole, hugely important part of, of the scientific enterprise. But once you get that theory derived, to try to estimate the parameters that are present in that theory from measurement, that problem just became clear to me. Once I had a clear problem statement, then I was able to start to think about how to solve it. And so the problem statement was: I have a theory that has a set of parameters. I want to try to figure out what the parameters are by taking [00:08:00] some measurements, and checking, for one set of parameters, the measurements would be different. How do I find what parameters there are to, to give me this type of data that I observe? I intentionally just stated that awkwardly because that awkwardness there sort of made the, it's funny, it made it clear to me that the problem was unclear. And, and so I, that's what got me into a Bayesian mode of thinking, because it was hard for me to wrap my head around what it meant to do that thing that I've been doing all this time, this minimizing sums of squares of residuals and trying to find the best-fit parameter. And, you know, in retrospect now, I've actually, you know, taught myself, 'cause I didn't really ever have a course in statistical inference or anything like that, to say, okay, I was essentially doing maximum likelihood estimation, which is a frequentist way of doing parameter estimation. And I, and I hadn't actually thought about what that meant. I mean, I understand that now. We don't really need to talk [00:09:00] about that since we're talking about Bayes stuff now, but it was just harder for me to wrap my head around what that meant. And so I started reading about the Bayesian interpretation of probability, and it was really, it really just crystallized everything and made it clear, and then I could state the problem much more clearly. The problem was: I was trying to find a posterior probability density function for these parameters given the data, and that was just so much more clearly stated in a Bayesian framework, and then that kinda lit me on fire, because I was like, holy cow, this thing that we do so often in the scientific enterprise, I can actually state the question, right? And I just thought that was such a profound moment, and then I was kind of hooked from there on out, and I, I was constantly trying to improve how I thought about these things. And yeah, so I did a lot of reading. I realized I just talked a lot. You probably have [00:10:00] some questions about some of the stuff I just said, so please.
Because instead you start poking around, you kind of don't really know where the holes in your knowledge are.And so what I saw was like just a giant hole in my knowledge and my toolbox, and I saw the hole and I said, All right, let's fill it . And um, and so then I just started feeling around on how to do that. I see. And I am also curious as [00:11:00] to, and what motivated you to dive into the Beijing way of doing things?I really do think it was the clarity. I think that, Okay. I think that arguing about like what interpretation or probability you wanna use is not the most fruitful way to spend one's time. For me, it was really, it was just so much more intuitive. I felt like I could have this interpretation of probability that it's, it's a quantification of the plausibility of a logical conjecture of any logical conjecture gave me sort of the flexibility where I could think about like a...

Oct 5, 2022 • 54min
#69 Why, When & How to use Bayes Factors, with Jorge Tendeiro
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

A great franchise comes with a great rivalry: Marvel has Iron Man and Captain America; physics has General Relativity and Quantum Physics; and Bayesian stats has Posterior Estimation and… Bayes Factors!

A few months ago, I had the pleasure of hosting EJ Wagenmakers, to talk about these topics. This time, I'm talking with Jorge Tendeiro, who has a different perspective on Null Hypothesis Testing in the Bayesian framework, and its relationship with generative models and posterior estimation.

But this is not your classic, click-baity podcast, and I'm not interested in pitching people against each other. Instead, you'll hear Jorge talk about the other perspective fairly, before even giving his take on the topic. Jorge will also tell us about the difficulty of arguing through papers, and all the nuances you lose compared to casual discussions.

But who is Jorge Tendeiro? He is a professor at Hiroshima University in Japan, and he was recommended to me by Pablo Bernabeu, a listener of this very podcast. Before moving to Japan, Jorge studied math and applied stats at the University of Porto, and did his PhD in the Netherlands. He focuses on item response theory (specifically person fit analysis), and, of course, Bayesian statistics, mostly Bayes factors.

He's also passionate about privacy issues in the 21st century, an avid Linux user since 2006, and is trying to get the hang of the Japanese language.

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Matthew McAnear, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Aaron Jones, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas and Robert Yolken.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Jorge's website: https://www.jorgetendeiro.com/
Jorge on Twitter: https://twitter.com/jntendeiro
Jorge on GitHub: https://github.com/jorgetendeiro
A Review of Issues About Null Hypothesis Bayesian Testing: https://pure.rug.nl/ws/portalfiles/portal/159021509/2019_26880_001.pdf
Advantages Masquerading as 'Issues' in Bayesian Hypothesis Testing – A Commentary on Tendeiro and Kiers: https://psyarxiv.com/nf7rp
On the white, the black, and the many shades of gray in between – Our reply to van Ravenzwaaij and Wagenmakers: https://psyarxiv.com/tjxvz/
LBS #61, Why we still use non-Bayesian methods, with EJ Wagenmakers: https://learnbayesstats.com/episode/61-why-we-still-use-non-bayesian-methods-ej-wagenmakers/
LBS #67, Exoplanets, Cool Worlds & Life in the Universe, with David Kipping: https://learnbayesstats.com/episode/67-exoplanets-cool-worlds-life-in-universe-david-kipping/

Sep 14, 2022 • 1h 6min
#68 Probabilistic Machine Learning & Generative Models, with Kevin Murphy
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Hosting someone like Kevin Murphy on your podcast is… complicated. Not because Kevin himself is complicated (he's delightful, don't make me say what I didn't say!), but because all the questions I had for him amounted to a 12-hour show. Sooooo, brace yourselves folks! No, I'm kidding. Of course I didn't do that folks, Kevin has a life!

This life started in Ireland, where he was born. He grew up in England and got his BA from the University of Cambridge. After his PhD at UC Berkeley, he did a postdoc at MIT, and was an associate professor of computer science and statistics at the University of British Columbia in Vancouver, Canada, from 2004 to 2012. After getting tenure, he went to Google in California in 2011 on his sabbatical and then ended up staying.

He currently runs a team of about 8 researchers inside of Google Brain working on generative models, optimization, and other, as Kevin puts it, "basic" research topics in AI/ML. He has published over 125 papers in refereed conferences and journals, as well as 3 textbooks on machine learning published in 2012, 2022 and the last one coming in 2023. You may be familiar with his 2012 book, as it was awarded the DeGroot Prize for best book in the field of statistical science.

Outside of work, Kevin enjoys traveling, outdoor sports (especially tennis, snowboarding and scuba diving), as well as reading, cooking, and spending time with his family.

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Matthew McAnear, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Aaron Jones, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Lin Yu Sha, Scott Anthony Robson, David Haas and Robert Yolken.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Kevin's website: https://www.cs.ubc.ca/~murphyk/
Kevin on Twitter: https://mobile.twitter.com/sirbayes
Kevin's books (free pdf) on GitHub (includes a link to places where you can buy the hard copy): https://probml.github.io/pml-book/
Book that inspired Kevin to get into AI: https://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden/dp/0465026567
State-space models library in JAX (WIP): https://github.com/probml/ssm-jax
Other software for the book (also in JAX): https://github.com/probml/pyprobml
Fun photo of Kevin's hacked-up wearable camera system from 20 years ago: https://www.cs.ubc.ca/~murphyk/Vision/placeRecognition.html

Aug 31, 2022 • 1h 1min
#67 Exoplanets, Cool Worlds & Life in the Universe, with David Kipping
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Is there life in the Universe? It doesn't get deeper than this, does it? And yet, why do we care about that? Given the very small chance that there is other life in the Universe, we have even less of a chance to discover it, talk to it and meet it. So, why do we care?

Well, it may surprise you, but Bayesian statistics helps us think about these astronomical and — dare I say? — philosophical topics, as my guest, David Kipping, will brilliantly explain in this episode.

David is an Associate Professor of Astronomy at Columbia University, where he leads the Cool Worlds Lab — I know, the name is awesome. His team's research spans exoplanet discovery and characterization, the search for life in the Universe and developing novel approaches to our exploration of the cosmos.

David also teaches astrostatistics, and his contributions to Bayesian statistics span from astrobiology to exoplanet detection. He also hosts the Cool Worlds YouTube channel, with over half a million subscribers, which discusses his team's work and broader topics within the field.

Cool worlds, cool guest, cool episode.

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Matthew McAnear, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Aaron Jones, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Lin Yu Sha, Scott Anthony Robson, David Haas and Robert Yolken.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
David's website: http://user.astro.columbia.edu/~dkipping/
David on Twitter: https://twitter.com/david_kipping
David's YouTube channel: https://www.youtube.com/c/coolworldslab
David's research group: https://www.coolworldslab.com/
Bayesian analysis of the astrobiological implications of life's early emergence on Earth: https://www.pnas.org/doi/10.1073/pnas.1111694108
We Have No Idea – A Guide to the Unknown Universe: https://www.goodreads.com/book/show/31625636-we-have-no-idea
Leonardo da Vinci's biography by Walter Isaacson: https://www.amazon.com/Leonardo-Vinci-Walter-Isaacson/dp/1501139169/ref=sr_1_1?keywords=leonardo+da+vinci+book&qid=1660142880&sprefix=leonardo+%2Caps%2C219&sr=8-1

Aug 17, 2022 • 1h 2min
#66 Uncertainty Visualization & Usable Stats, with Matthew Kay
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

I have to confess something: I love challenges. And when you're a podcaster, what's a better challenge than dedicating an episode to… visualization? Impossible you say? Well, challenge accepted!

Thankfully, I got the help of a visualization Avenger for this episode — namely, Matthew Kay. Matt is an Assistant Professor jointly appointed in Computer Science and Communications Studies at Northwestern University, where he co-directs the Midwest Uncertainty Collective — I know, it's a pretty cool name for a lab.

He works in human-computer interaction and information visualization, and especially in uncertainty visualization. He also builds tools to support uncertainty visualization in R. In particular, he's the author of the tidybayes and ggdist R packages, and wrote the random variable interface in the posterior package.

I promise, you won't be uncertain about the importance of uncertainty visualization after that…

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Matthew McAnear, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Aaron Jones, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Lin Yu Sha and Scott Anthony Robson.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Matt on Twitter: https://twitter.com/mjskay
Matt on GitHub: https://github.com/mjskay
Matt's website: https://www.mjskay.com/
Midwest Uncertainty Collective lab: https://mucollective.northwestern.edu/
PyMC find_constrained_priors tutorial: https://www.youtube.com/watch?v=9shZeqKG3M0
PyMC find_constrained_priors doc: https://www.pymc.io/projects/docs/en/latest/api/generated/pymc.find_constrained_prior.html

Tutorials / package documentation / videos:
tidybayes: http://mjskay.github.io/tidybayes/
ggdist: https://mjskay.github.io/ggdist/ (various visualizations in the slabinterval vignette: https://mjskay.github.io/ggdist/articles/slabinterval.html)
Miscellaneous uncertainty visualizations examples: https://github.com/mjskay/uncertainty-examples
Talk on uncertainty visualization: https://www.youtube.com/watch?v=E1kSnWvqCw0

Biases in probability perception:
A survey paper on the linear-in-log-odds model of probability perception: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3261445/
Using the linear-in-log-odds model to "debias" uncertainty visualization: https://osf.io/6xcnw/

Aug 3, 2022 • 1h 5min
#65 PyMC, Aeppl, & Aesara: the new cool kids on the block, with Ricardo Vieira
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Folks, there are some new cool kids on the block. They are called PyMC, Aeppl, and Aesara, and it's high time we give them a proper welcome!

To do that, who better than one of the architects of the new PyMC 4.0 — Ricardo Vieira! In this episode, he'll walk us through the inner workings of the newly released version of PyMC, telling us why the Aesara backend and the brand new RandomVariable operators constitute such strong foundations for your beloved PyMC models. He will also tell us about a self-contained PPL project called Aeppl, dedicated to converting model graphs to probability functions — pretty cool, right?

Oh, in case you didn't guess yet, Ricardo is a PyMC developer and data scientist at PyMC Labs. He spent several years teaching himself Statistics and Computer Science at the expense of his official degrees in Psychology and Neuroscience.

So, get ready for efficient random generator functions, better probability evaluation functions, and a fully-fledged modern Bayesian workflow!

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Matthew McAnear, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Aaron Jones, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Lin Yu Sha and Scott Anthony Robson.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Ricardo on Twitter: https://twitter.com/RicardoV944
Ricardo on GitHub: https://github.com/ricardoV94/
Ricardo's website: https://ricardov94.github.io/posts/
PyMC, Aesara and Aeppl: The New Kids on The Block (YouTube video): https://www.youtube.com/watch?v=_APNiXTfYJw
Bayesian Vector Autoregression in PyMC: https://www.pymc-labs.io/blog-posts/bayesian-vector-autoregression/
New PyMC website: https://www.pymc.io/projects/docs/en/stable/learn.html
Define, optimize, and evaluate mathematical expressions with Aesara: https://aesara.readthedocs.io/en/latest/
Aeppl documentation: https://aeppl.readthedocs.io/en/latest/
PyMC's YouTube channel: https://www.youtube.com/c/PyMCDevelopers
PyMC on Twitter: https://twitter.com/pymc_devs
PyMC on LinkedIn: https://www.linkedin.com/company/pymc/mycompany/
LBS #61, Why we still use non-Bayesian methods, with EJ Wagenmakers: https://www.learnbayesstats.com/episode/61-why-we-still-use-non-bayesian-methods-ej-wagenmakers
LBS #31, Bayesian Cognitive Modeling & Decision-Making, with Michael Lee: https://www.learnbayesstats.com/episode/31-bayesian-cognitive-modeling-michael-lee
We Have No Idea, A Guide to the Unknown Universe: https://www.amazon.com/We-Have-No-Idea-Universe/dp/0735211515

Jul 20, 2022 • 1h 7min
#64 Modeling the Climate & Gravity Waves, with Laura Mansfield
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

I'm sure you've already heard of gravitational waves, because my listeners are the coolest and smartest ever ;) But did you know about gravity waves? That's right, waves in the sky due to gravity — sounds awesome, right?

Well, I'm pretty sure that Laura Mansfield will confirm your prior. Currently a postdoc at Stanford University, Laura studies — guess what? — gravity waves and how they are represented in climate models. In particular, she uses Bayesian methods to estimate the uncertainty on the gravity wave components of the models.

Holding a PhD from the University of Reading in the UK, Laura has a background in atmospheric physics, but she's interested in climate change and environmental issues.

So sit back, chill out, and enjoy this physics-packed, aerial episode!

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Matthew McAnear, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Aaron Jones, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Lin Yu Sha and Scott Anthony Robson.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Laura on Twitter: https://twitter.com/lau_mansfield
Laura's webpage: https://profiles.stanford.edu/laura-mansfield
Julia package for Gaussian Processes: https://github.com/STOR-i/GaussianProcesses.jl
Julia implementation of the scikit-learn API: https://github.com/cstjean/ScikitLearn.jl
Derivative-free Bayesian optimization techniques based on Ensemble Kalman Filters: https://github.com/CliMA/EnsembleKalmanProcesses.jl