Game Theory for Model Interpretability: Shapley Values
Linear Digressions
A Shapley Value Is Just Averaging Every Single Contribution That an Actor or Player or Person Could Have
It's just averaging every single possible contribution that the actor, or player, or person could have, in every single situation. And then you can kind of imagine: okay, if you add this person to some arbitrary situation, they're going to have an average impact of whatever. So now, instead of composing a soccer team out of players, the outcome that we're interested in is not how many points you scored. The outcome that we are interested in is the prediction that a machine learning model is making on an individual case, and the players, instead of being soccer players, are all of the features that go into making that prediction.
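The averaging described above can be written out exactly for small games: for every ordering of the players, record each player's marginal contribution when they join, then average over all orderings. Here is a minimal sketch in Python; the toy "model" and its feature weights are made-up illustrations, not anything from the episode.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all possible orderings of the players."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            after = value(frozenset(coalition))
            totals[p] += after - before  # p's marginal contribution
    n_orderings = factorial(len(players))
    return {p: t / n_orderings for p, t in totals.items()}

# Toy "model": the prediction is a weighted sum of whichever
# features are present (weights are hypothetical, for illustration).
weights = {"age": 2.0, "income": 5.0, "tenure": 1.0}

def prediction(feature_set):
    return sum(weights[f] for f in feature_set)

phi = shapley_values(list(weights), prediction)
```

Because this toy model is purely additive, each feature's Shapley value comes out equal to its weight, and the values sum to the full prediction, which is the "efficiency" property that makes Shapley values attractive for attributing a model's output to its features. In practice, with many features, the factorial number of orderings forces approximations (sampling orderings, or tools like the SHAP library) rather than this exact enumeration.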