2min chapter


"Two-year update on my personal AI timelines" by Ajeya Cotra

LessWrong (Curated & Popular)

CHAPTER

Is There Evidence for Efficient Meta Learning?

I haven't yet seen evidence that language models can be taught new skills they definitely didn't already know over the course of many rounds of back-and-forth. I've also seen EfficientZero cited as evidence that SGD itself can reach human-level sample efficiency without the need for explicit meta-learning, but this doesn't seem right to me. The relevant comparison is how much time it would take a human to figure out what's going on in the game and what they're supposed to aim for. And that's not something that would take a human two hours of watching to work out; probably more like 15 to 60 seconds.
