117 - Interpreting NLP Model Predictions, with Sameer Singh

NLP Highlights

How to Interpret a Predictive Model

The feature attribution method uses a loss function that we compute gradients on. We take the gradient of the output with respect to the input and see which parts of the input have the highest gradient. Mathematically, what that means is that if those parts of the input were changed slightly, the prediction would change a lot.
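
Below is a minimal sketch of this gradient-based saliency idea in PyTorch. The toy model, vocabulary size, and input token IDs are illustrative assumptions, not something from the episode; the point is only the mechanics: compute the predicted score, backpropagate it to the input embeddings, and rank tokens by the size of their gradients.

```python
# A minimal sketch of gradient-based feature attribution (saliency).
# Model, dimensions, and the example token IDs are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab_size, embed_dim, num_classes = 100, 16, 2

# Toy classifier: embed tokens, average the embeddings, project to class scores.
embedding = nn.Embedding(vocab_size, embed_dim)
classifier = nn.Linear(embed_dim, num_classes)

token_ids = torch.tensor([[5, 42, 7, 19]])  # a hypothetical tokenized sentence

# Get embeddings and ask autograd to keep their gradients.
embeds = embedding(token_ids)
embeds.retain_grad()

# Forward pass: score of the predicted class.
logits = classifier(embeds.mean(dim=1))
predicted_class = logits.argmax(dim=-1).item()
score = logits[0, predicted_class]

# Backward pass: gradient of the predicted score w.r.t. each input embedding.
score.backward()

# Saliency per token: L2 norm of the gradient. Tokens whose small perturbation
# would change the prediction the most get the largest values.
saliency = embeds.grad.norm(dim=-1).squeeze(0)
print(saliency)
```

In practice the same recipe is applied to a real NLP model by hooking the embedding layer and backpropagating from the predicted class score (or the loss), then visualizing the per-token gradient norms as a saliency map.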
