
117 - Interpreting NLP Model Predictions, with Sameer Singh
NLP Highlights
The Pathological Nature of Modeling
The paper that introduced this used input reduction on SQuAD, the Stanford question answering dataset, and on SNLI. It showed that even with very large reductions of the input, leaving only a few words, the model's prediction stayed the same. And so we think our models are perhaps doing complex grammatical reasoning, that they need to actually understand the grammar of English. But at some level, at least when you force them to make predictions from these reduced inputs, these methods seem to show that they're not actually leveraging much of the grammar of English at all.
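To make the idea concrete, here is a minimal sketch of input reduction as described above. It is an illustration only, not the paper's exact algorithm, and it assumes a hypothetical `predict` function that maps a list of tokens to a (label, confidence) pair for the model under study.

```python
# Sketch of greedy input reduction (illustrative, not the paper's exact method).
# Assumes a hypothetical `predict(tokens) -> (label, confidence)` callable.

from typing import Callable, List, Tuple


def input_reduction(
    tokens: List[str],
    predict: Callable[[List[str]], Tuple[str, float]],
) -> List[str]:
    """Greedily remove tokens while the model's predicted label stays the same."""
    original_label, _ = predict(tokens)
    reduced = list(tokens)
    while len(reduced) > 1:
        best_candidate = None
        best_confidence = -1.0
        # Try removing each remaining token; keep the removal that preserves
        # the original prediction with the highest confidence.
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            label, confidence = predict(candidate)
            if label == original_label and confidence > best_confidence:
                best_candidate = candidate
                best_confidence = confidence
        if best_candidate is None:
            break  # every removal flips the prediction, so stop reducing
        reduced = best_candidate
    return reduced
```

The pathological behavior the episode discusses shows up when this loop strips a question or hypothesis down to a handful of words, often ungrammatical ones, and the model still returns its original answer with high confidence.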