
138 - Compositional Generalization in Neural Networks, with Najoung Kim

NLP Highlights


Pre-Trained Models - What's the Second Setup?

The results showed that making the substitution does degrade performance compared to what's been reported in the literature. This is actually about a 15 to 20 percentage point degradation across the different character sampling strategies that we tested. And I think this does suggest that the reported results in the literature have been overestimated to some degree for not having controlled for this lexical confound.

