
#80 Bayesian Additive Regression Trees (BARTs), with Sameer Deshpande

Learning Bayesian Statistics


BART's Non-Parametric Regression Model

The idea of BART is a bit like that, right? It's like finding the most fundamental particles into which you can disintegrate the model, and then trying to build a model by adding all of those particles together. Hugh and Rob really came up with a way in that 2010 paper to make this work well. And so it makes it really easy for practitioners who are like, I just need to run a regression. They don't have to do a ton of hyperparameter tuning. There's not a lot of internal cross-validation to get the model to run just right. The availability of good default settings that work well across problems is why I think it's found so much use in applications.
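To make the "run a regression with the defaults" point concrete, here is a minimal sketch of a BART fit, assuming the pymc-bart add-on for PyMC (a library choice not mentioned in the episode); the synthetic data, the variable names, and the tree count m=50 are illustrative assumptions, not anything from the conversation.

```python
# Sketch: a BART regression fit with default-style settings and no tuning,
# assuming the pymc-bart package (pip install pymc-bart).
import numpy as np
import pymc as pm
import pymc_bart as pmb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                  # hypothetical covariates
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

with pm.Model() as model:
    # A sum of m small regression trees stands in for the unknown mean function;
    # m=50 is a commonly used default left untuned here.
    mu = pmb.BART("mu", X, y, m=50)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample()                                        # BART nodes get their specialized sampler automatically
```

The point of the sketch is only that, as discussed in the quote, the priors and sampler come with sensible defaults, so there is no cross-validation loop wrapped around the fit.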
