19 - Mechanistic Interpretability with Neel Nanda

AXRP - the AI X-risk Research Podcast

CHAPTER (4 min)

Scaling Laws and Deep Learning

DeepMind's Chinchilla paper's main interesting result was that everyone was taking models that were too big and training them on too little data. They made a 70 billion parameter model that was about as good as Google's PaLM, which is 540 billion parameters, but with notably less compute. I will caveat that I think parameters are somewhat overrated as a way of gauging model capability. The scaling laws work has been fairly net negative and has been used by people just trying to push frontier capabilities, though I don't have great insight into these questions.
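For context, here is a minimal sketch of the compute accounting behind that result, assuming the standard approximations from the Chinchilla paper (training FLOPs C ≈ 6·N·D for N parameters and D training tokens, and a compute-optimal data budget of roughly 20 tokens per parameter); these rules of thumb come from the paper itself, not from this episode snippet:

```python
# Rough compute accounting for the Chinchilla result (Hoffmann et al., 2022).
# Assumptions (standard approximations, not stated in this episode snippet):
#   - training FLOPs: C ~= 6 * N * D, for N parameters and D training tokens
#   - compute-optimal training: D ~= 20 tokens per parameter

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via C ~= 6 * N * D."""
    return 6.0 * n_params * n_tokens


def chinchilla_optimal_tokens(n_params: float) -> float:
    """Roughly compute-optimal token budget: ~20 tokens per parameter."""
    return 20.0 * n_params


if __name__ == "__main__":
    n = 70e9                          # Chinchilla: 70B parameters
    d = chinchilla_optimal_tokens(n)  # ~1.4e12 tokens, as in the paper
    print(f"{n:.0e} params -> {d:.1e} tokens -> "
          f"~{training_flops(n, d):.2e} training FLOPs")
```

Plugging in Chinchilla's 70B parameters gives roughly 1.4 trillion training tokens and on the order of 6e23 FLOPs, which is why a much smaller model trained on far more data could match larger models at lower compute.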
