117 - Interpreting NLP Model Predictions, with Sameer Singh

NLP Highlights

00:00

The Trade-Off Between Interpretability and Non-Linear Models

There is no reason to think that an explanation for NLI, even if it's perfect, would be the same form of explanation that works for something like reading comprehension or machine translation, and so on. The biggest push for interpretability has come as models have become non-linear, as you must have seen in your introductory neural network courses. And a prediction is something that anybody can understand. Maybe we're not really good with probabilities, but let's keep that aside.

