
Game Theory for Model Interpretability: Shapley Values

Linear Digressions


What Makes a Good Explanatory Algorithm?

So what makes something a good explanatory algorithm? I'm following a paper called "A Unified Approach to Interpreting Model Predictions." In this paper they cite a few attributes that a good feature-importance, or feature-attribution, method should have. The first is what they call local accuracy: if you have a simplified model that approximates a more complex model, then if you put the same inputs into the simplified model as you do into the original model, you should end up with the same answer. The second is consistency: if a feature's contribution to the outcome increases, then that feature's importance should increase as well. So again, this is fairly simple, but that's the idea.
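One way to see the local-accuracy property in action is to compute exact Shapley values for a tiny model and check that the attributions plus the baseline prediction add back up to the model's output. Below is a minimal sketch, assuming a hypothetical toy model and a baseline of zeros; the enumeration over all coalitions is only feasible for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.
    Features outside a coalition S are set to their baseline value."""
    n = len(x)

    def value(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley coalition weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical toy model (an assumption for illustration): f(x) = 2*x0 + 3*x1
f = lambda z: 2 * z[0] + 3 * z[1]
x = [1.0, 2.0]
baseline = [0.0, 0.0]
phi = shapley_values(f, x, baseline)

# Local accuracy: baseline prediction + sum of attributions = model output
assert abs(f(baseline) + sum(phi) - f(x)) < 1e-9
```

For a linear model like this one, each feature's Shapley value is just its coefficient times its deviation from the baseline, which makes the local-accuracy check easy to verify by hand.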
