
#80 Bayesian Additive Regression Trees (BARTs), with Sameer Deshpande

Learning Bayesian Statistics


BART for Non-Parametric Regression

BART originally got its start in non-parametric regression with Gaussian errors. With BART, you're getting the full probabilistic model. So to do BART for classification, it's easiest to frame it as a probit regression. And once you do that, you can start to do things like heteroscedastic regression, where not only does the mean function change with X, but your sigma changes with X too. Other approaches to these problems take a lot of work in designing loss functions, and because you're no longer working with a full probabilistic model, it becomes harder to quantify uncertainties.
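The probit framing mentioned above can be sketched in a few lines of NumPy. This is a toy illustration of the data-augmentation idea (in the style of Albert and Chib) that probit BART builds on, not the actual BART implementation: a latent variable z_i ~ N(f(x_i), 1) is drawn consistent with each binary label, after which the sum-of-trees fit proceeds exactly as in the Gaussian-error case. The function names and the stand-in mean function `f` here are my own.

```python
import math
import numpy as np

def probit(z):
    """Standard normal CDF, Phi(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_latents(f_vals, y, rng):
    """Draw z_i ~ N(f(x_i), 1) truncated to (0, inf) when y_i = 1
    and to (-inf, 0] when y_i = 0. A simple rejection sampler is
    fine for illustration when |f| is moderate."""
    z = np.empty_like(f_vals)
    for i, (m, yi) in enumerate(zip(f_vals, y)):
        while True:
            draw = rng.normal(m, 1.0)
            if (draw > 0) == bool(yi):
                z[i] = draw
                break
    return z

rng = np.random.default_rng(0)

# Toy data: binary labels generated through the probit link
x = np.linspace(-2.0, 2.0, 200)
f = np.sin(2.0 * x)                       # stand-in for BART's sum-of-trees fit
p = np.array([probit(v) for v in f])      # P(y = 1 | x)
y = (rng.uniform(size=x.size) < p).astype(int)

# Augmentation step: with z in hand, the trees would be updated
# exactly as in Gaussian-error BART, with z playing the role of y.
z = sample_latents(f, y, rng)
```

Because the classifier is just Gaussian-error BART applied to the latent z, the full posterior machinery (and hence the uncertainty quantification) carries over unchanged.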

