117 - Interpreting NLP Model Predictions, with Sameer Singh

NLP Highlights

The Problems With Linear Models

I feel like even with a linear model, if you have overlapping features in any way, you can get correlations that are hard to interpret. I think it all depends on how many features are going into your linear model. People sometimes define features by running a different model and taking its output to create a feature. And in some sense, that's what neural networks do: they apply a nonlinear transformation, and then you have a linear layer on top.
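To make the collinearity point concrete, here is a minimal sketch (not from the episode) using NumPy: two near-duplicate features fit by ordinary least squares. Across bootstrap resamples, the individual weights swing widely while their sum stays stable, so reading either coefficient as "this feature's importance" is unreliable. The feature construction and noise levels here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "overlapping" features: near-duplicates of the same underlying signal.
n = 200
signal = rng.normal(size=n)
x1 = signal + 0.01 * rng.normal(size=n)
x2 = signal + 0.01 * rng.normal(size=n)   # almost a copy of x1
y = signal + 0.1 * rng.normal(size=n)

# Fit ordinary least squares on two bootstrap resamples. The combined
# effect (w1 + w2) is stable, but the individual weights trade off
# against each other almost arbitrarily, so neither is interpretable
# on its own.
for seed in (1, 2):
    idx = np.random.default_rng(seed).integers(0, n, size=n)
    X = np.column_stack([x1[idx], x2[idx]])
    (w1, w2), *_ = np.linalg.lstsq(X, y[idx], rcond=None)
    print(f"resample {seed}: w1={w1:+.2f}  w2={w2:+.2f}  sum={w1 + w2:+.2f}")
```

The closing remark in the quote has the same structure: a network's hidden layers act as learned feature constructors, with a final linear layer on top, so the interpretability problem simply moves into the learned features.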
