137 - Nearest Neighbor Language Modeling and Machine Translation, with Urvashi Khandelwal

NLP Highlights

kNN Language Models Improve Perplexity

The first experiment compared the traditional neural language model with the interpolated kNN language model, using the same training data. We still saw huge improvements in perplexity, which is pretty surprising, I think. The main takeaway from this experiment for us was that when the datastore contains the same data the model was trained on, the kNN-LM still results in improved generalization.
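The interpolation described here can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: the function name `knn_lm_probs`, the toy vocabulary, and the interpolation weight `lam` are assumptions for the example, and the kNN distribution is formed by a softmax over negative neighbor distances, with the final distribution being a weighted mix of the kNN and base-LM distributions.

```python
import math

def knn_lm_probs(lm_probs, neighbor_targets, neighbor_dists, lam=0.25):
    """Interpolate a base LM distribution with a kNN distribution.

    lm_probs: dict mapping token -> probability from the base neural LM.
    neighbor_targets: the target token stored with each retrieved neighbor.
    neighbor_dists: distance of each neighbor in representation space.
    lam: interpolation weight on the kNN distribution (a hyperparameter).
    """
    # Softmax over negative distances: closer neighbors get more weight.
    weights = [math.exp(-d) for d in neighbor_dists]
    total = sum(weights)
    knn_probs = {}
    for tok, w in zip(neighbor_targets, weights):
        knn_probs[tok] = knn_probs.get(tok, 0.0) + w / total
    # p(y|x) = lam * p_knn(y|x) + (1 - lam) * p_lm(y|x)
    vocab = set(lm_probs) | set(knn_probs)
    return {t: lam * knn_probs.get(t, 0.0) + (1 - lam) * lm_probs.get(t, 0.0)
            for t in vocab}

# Toy usage: two retrieved neighbors predict "dog", one predicts "cat".
p = knn_lm_probs({"cat": 0.6, "dog": 0.4},
                 ["dog", "dog", "cat"], [1.0, 1.2, 2.0])
```

Because the datastore can be built from the training data itself (as in the experiment above), retrieval shifts probability mass toward tokens that followed similar contexts in training, which is where the perplexity gains come from.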
