
117 - Interpreting NLP Model Predictions, with Sameer Singh

NLP Highlights


The Importance of Influence Functions in NLP

So basically we're talking about two gradient steps here. And you're computing a Hessian over your entire training data, which, if you have a lot of training data, like, say, BERT pre-training data, could be a nightmare. They have approximations that try to get around this, and they work to some degree. Okay, this sounds like an interesting direction. I get the feeling from what you said that this is still pretty early in its application, especially in NLP, but it's a really interesting potential avenue for a bunch of interesting work. Yes.
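The idea discussed above can be sketched in code. The following is a minimal, hypothetical illustration of an influence-function-style score for a logistic-regression model: the influence of a training point on a test prediction involves the test gradient, the inverse Hessian of the training loss, and the training-point gradient. Because the exact Hessian over all training data is expensive (the "nightmare" mentioned in the episode), this sketch uses the common identity-Hessian simplification, so the score reduces to a negative gradient dot product. All function names and data here are illustrative assumptions, not from the episode.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, x, y):
    """Gradient of the binary log loss for one example (x, y) w.r.t. weights w."""
    p = sigmoid(x @ w)
    return (p - y) * x

def influence_score(w, x_train, y_train, x_test, y_test):
    """Approximate influence of upweighting a training point on the test loss.

    The full formula is -grad_test^T H^{-1} grad_train; here we assume an
    identity Hessian (a crude but cheap stand-in for the approximations
    mentioned in the episode). A negative score suggests the training
    point is helpful for this test example (removing it would hurt).
    """
    g_test = grad_logloss(w, x_test, y_test)
    g_train = grad_logloss(w, x_train, y_train)
    return -(g_test @ g_train)

# Toy example: two similar positive examples, so the training point helps.
w = np.array([0.5, -0.3])
x_tr, y_tr = np.array([1.0, 2.0]), 1.0
x_te, y_te = np.array([0.8, 1.5]), 1.0
score = influence_score(w, x_tr, y_tr, x_te, y_te)
print(score)
```

In practice, the inverse-Hessian-vector product is what the approximations discussed in the episode target, since materializing the full Hessian over something like BERT's pre-training data is infeasible.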

