
117 - Interpreting NLP Model Predictions, with Sameer Singh

NLP Highlights

The Importance of Interpretability in Model Design

The third class tries to say: let me bake interpretability, or explanations, into the model itself. So I'm changing my model architecture somehow. There's been a lot of work in this area. We've been talking a lot about the problems with explainability techniques and how post-hoc explanations fail to capture one thing or another. These methods instead start from the view that the explanations are as important as the prediction itself, and sometimes even more so.
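To make the idea concrete, here is a minimal, hypothetical sketch (not from the episode) of a model whose architecture produces explanations by construction: a bag-of-words linear classifier, where each token's learned weight *is* its contribution to the score, so the explanation falls directly out of the prediction rather than being computed post hoc.

```python
# Hypothetical sketch: an inherently interpretable model.
# In a bag-of-words linear classifier, each token's weight is its
# contribution to the score, so the explanation is part of the prediction.

def predict_with_explanation(weights, bias, tokens):
    """Score a token list and return per-token contributions as the explanation."""
    contributions = {tok: weights.get(tok, 0.0) for tok in tokens}
    score = bias + sum(contributions.values())
    label = "positive" if score > 0 else "negative"
    return label, contributions

# Toy sentiment weights (illustrative values only)
weights = {"great": 1.2, "boring": -1.5, "plot": 0.1}
label, expl = predict_with_explanation(weights, 0.0, ["great", "plot"])
print(label)  # positive
print(expl)   # {'great': 1.2, 'plot': 0.1}
```

Architecture changes discussed in this line of work (rationale extraction, attention-as-explanation, prototype networks) generalize this idea: the model is structured so that the evidence for a prediction is exposed as a first-class output.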

