138 - Compositional Generalization in Neural Networks, with Najoung Kim

NLP Highlights

CHAPTER

Pre-Trained Models Are Not Doing So Well on the Lexical Portion of the COGS Task?

Based on prior reports, it seems that pre-training at least gives you better lexical generalization, and pre-trained models appear to do pretty well on the lexical portion of the COGS task. But this work argues that the evaluation setup may be confounded by certain properties of the dataset and by the assumptions that need to hold for this generalization test to work.
