4-minute chapter

#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well

80,000 Hours Podcast

CHAPTER

Is There a Trade-Off Between Scaling Model Size and Scaling Data in Machine Learning?

We just don't get much chance to experience the world, because our lives are so short and we die so quickly. So there's actually a bunch of work on scaling laws for large language models, which basically asks: if you have a certain amount of compute to spend, should you spend it on making the model bigger, or should you spend it on training it for longer on more data? And it turns out that, from this perspective, if you had as much compute as the human brain uses throughout a human lifetime, then the optimal way to spend that is not having a network the size of the human brain. It's actually having a significantly smaller network and training it on much more data. We're just…
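To make that trade-off concrete, here is a minimal back-of-the-envelope sketch using the "Chinchilla" scaling-law heuristic (Hoffmann et al. 2022): training compute C ≈ 6·N·D FLOPs for N parameters and D tokens, with the loss-optimal allocation putting roughly 20 tokens per parameter. The brain-compute figures below are illustrative assumptions for the sketch (~1e15 FLOP/s over ~30 years), not numbers from the episode.

```python
# Back-of-the-envelope sketch of the compute-optimal model-size vs. data
# trade-off discussed above, under the Chinchilla heuristic:
#   C ~= 6 * N * D   (training FLOPs for N parameters, D tokens)
#   D ~= 20 * N      (approximate compute-optimal tokens-per-parameter ratio)

BRAIN_FLOPS_PER_SEC = 1e15               # assumed order-of-magnitude estimate
LIFETIME_SECONDS = 30 * 365 * 24 * 3600  # ~30 years, an illustrative lifetime

# Treat a human lifetime of brain activity as the compute budget.
C = BRAIN_FLOPS_PER_SEC * LIFETIME_SECONDS

# Substituting D = 20*N into C = 6*N*D gives C = 120*N**2,
# so the compute-optimal parameter count is N = sqrt(C / 120).
N_opt = (C / 120) ** 0.5
D_opt = 20 * N_opt

print(f"Compute budget:     {C:.2e} FLOPs")
print(f"Optimal parameters: {N_opt:.2e}")
print(f"Optimal tokens:     {D_opt:.2e}")
```

With these assumed numbers the optimal model comes out around 1e11 parameters, far fewer than the roughly 1e14 synapses in a human brain, trained on about 2e12 tokens, far more text than a person encounters in a lifetime. That is the quantitative shape of the point being made: smaller network, much more data.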
