3min chapter


Sewon Min: The Science of Natural Language

The Gradient: Perspectives on AI

CHAPTER

The Meta-Training of Language Models

It seems like model saturation is something that we tend to see a lot. And so it's interesting, but I guess not entirely surprising. The meta-trained model ignores the input-label mapping more than regular language models do. Meta-training just helps the model better locate a task that was already learned during training, but it doesn't seem to teach the model an entirely new ability.
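The input-label mapping point can be probed directly: if you corrupt the labels in the in-context demonstrations and the model's predictions barely change, the demos are serving mainly to locate the task rather than to teach it. A minimal sketch of how such a flipped-label probe prompt might be built (the sentiment examples and prompt format here are illustrative assumptions, not from the episode):

```python
# Hypothetical sentiment demonstrations (illustrative only).
demos = [
    ("the movie was wonderful", "positive"),
    ("a dull, plodding mess", "negative"),
    ("an instant classic", "positive"),
    ("i want those two hours back", "negative"),
]

def build_prompt(demos, query, flip_labels=False):
    """Format k demonstrations plus a query as an in-context prompt.

    With flip_labels=True the input-label mapping is deliberately
    corrupted; comparing model accuracy on flipped vs. correct labels
    is one way to test whether the model actually reads the labels or
    merely uses the demos to recognize a task it already knows.
    """
    lines = []
    for text, label in demos:
        if flip_labels:
            label = "negative" if label == "positive" else "positive"
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_prompt(demos, "a charming little film", flip_labels=True))
```

Both prompt variants would then be sent to the model under study, and a meta-trained model, per the discussion above, would be expected to degrade less on the flipped version.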
