
117 - Interpreting NLP Model Predictions, with Sameer Singh

NLP Highlights

00:00

The Importance of Explanation in Interpretability Methods

When you perturb text, you don't necessarily get something that's valid or grammatical. So how can we even understand how accurate or valid the method is if it's changing the text in a way that produces ungrammatical text? That's one of the key challenges we're struggling with in these perturbation-based techniques. In practice, yes, the inputs might be invalid, but the model's behavior on them is still useful for understanding what's going on.
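To make the idea concrete, here is a minimal sketch of one common perturbation-based interpretation technique: leave-one-out word importance, where each word is dropped in turn and the change in the model's prediction is recorded. The `model_predict` function below is a hypothetical toy classifier standing in for any real model; this illustrates the general technique, not the specific methods discussed in the episode.

```python
# Minimal sketch of perturbation-based interpretation (leave-one-out
# word importance). `model_predict` is a hypothetical stand-in for any
# classifier that maps a string to a probability for the predicted class.

def model_predict(text: str) -> float:
    # Hypothetical toy model: scores positivity by counting cue words.
    positive = {"great", "good", "excellent"}
    words = text.lower().split()
    return sum(w in positive for w in words) / max(len(words), 1)

def word_importance(text: str) -> list[tuple[str, float]]:
    """Score each word by how much the prediction drops when it is removed.

    Note: the perturbed inputs may be ungrammatical; as discussed in the
    episode, the model's behavior on them can still be informative.
    """
    words = text.split()
    base = model_predict(text)
    scores = []
    for i in range(len(words)):
        perturbed = " ".join(words[:i] + words[i + 1:])
        scores.append((words[i], base - model_predict(perturbed)))
    return scores

if __name__ == "__main__":
    for word, score in word_importance("The movie was great fun"):
        print(f"{word:>6}: {score:+.3f}")
```

Running this prints a per-word score where larger positive values mark words whose removal hurts the prediction most; with a real model you would swap in its scoring function for `model_predict`.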

