
117 - Interpreting NLP Model Predictions, with Sameer Singh

NLP Highlights


Influence Functions in Modeling

When you described influence functions, it sounded a whole lot to me like just k-nearest neighbors. Can I just find the nearest neighbor of my input, and is that sufficient? What's different here?

The main difference from just using nearest neighbors on the input is trying to understand what the model thinks is the nearest neighbor, as opposed to what your raw embeddings would give you. But also, in some sense, you want to attribute a little bit more to the training process itself, or look at the parameters inside the model.
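To make the contrast concrete, here is a minimal sketch of the two views discussed above: plain k-nearest neighbors over the raw inputs versus an influence-function score in the style of Koh and Liang (2017), which routes the comparison through the model's parameters via a gradient and inverse-Hessian product. It assumes a linear model with squared loss so the Hessian has a closed form; all names and the toy data are illustrative, not from the episode.

```python
import numpy as np

def grad_loss(x, y, theta):
    # Gradient of the per-example squared loss 0.5 * (theta @ x - y)**2 w.r.t. theta.
    return (theta @ x - y) * x

def knn_neighbors(test_x, train_X, k=3):
    # "Raw embedding" view: closest training inputs by Euclidean distance.
    dists = np.linalg.norm(train_X - test_x, axis=1)
    return np.argsort(dists)[:k]

def influence_scores(test_x, test_y, train_X, train_y, theta):
    # Influence-function view: influence(z) ~= -grad L(z_test)^T H^{-1} grad L(z),
    # where H is the Hessian of the average training loss at theta.
    n, d = train_X.shape
    H = sum(np.outer(x, x) for x in train_X) / n     # Hessian of squared loss
    H_inv = np.linalg.inv(H + 1e-6 * np.eye(d))      # small damping for stability
    g_test = grad_loss(test_x, test_y, theta)
    return np.array([
        -g_test @ H_inv @ grad_loss(x, y, theta)
        for x, y in zip(train_X, train_y)
    ])

# Toy usage: compare the two rankings of training points for one test input.
rng = np.random.default_rng(0)
train_X = rng.normal(size=(100, 5))
theta = rng.normal(size=5)
train_y = train_X @ theta + 0.1 * rng.normal(size=100)
test_x, test_y = rng.normal(size=5), 0.0

print("nearest by raw input:", knn_neighbors(test_x, train_X))
scores = influence_scores(test_x, test_y, train_X, train_y, theta)
# Most negative scores = training points whose upweighting most lowers the test loss.
print("most helpful by influence:", np.argsort(scores)[:3])
```

The point of the sketch is that the kNN ranking depends only on distances between inputs, while the influence ranking depends on the loss gradients and curvature at the trained parameters, so the two can disagree about which training examples "explain" a prediction.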
