
138 - Compositional Generalization in Neural Networks, with Najoung Kim

NLP Highlights


Pre-Trained Models Are Not Doing So Well on the Lexical Portion of the COGS Task?

It seems, based on prior reports, that pre-training at least gives you better lexical generalization, and pre-trained models appear to do pretty well on the lexical portion of the COGS task. But this work argues that the evaluation setup may be confounded by certain properties of the dataset and by assumptions that need to hold for this generalization test to work.

