3min chapter

Eric Michaud on scaling, grokking and quantum interpretability

The Inside View

CHAPTER

Grokking in Neural Networks

Grokking is this phenomenon where neural networks can generalize long after they first overfit their training data. It was first discovered by some folks at OpenAI. They were training small transformer models to learn basic math operations. If they kept training the network for way longer than it took for the network to overfit, eventually the network would generalize.
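To make that setup concrete, here is a minimal sketch of a grokking-style experiment: train a small network on modular addition with strong weight decay, far past the point where it has memorized the training half of the table. The original paper used small transformers; the MLP architecture, hyperparameters, and step budget below are illustrative assumptions, not the authors' exact recipe.

```python
import torch
import torch.nn as nn

# Task: (a + b) mod p, the kind of basic math operation used in the
# grokking experiments. p = 97 is an illustrative choice.
p = 97
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))  # all (a, b)
labels = (pairs[:, 0] + pairs[:, 1]) % p

# Hold out half the table; the network memorizes the training half
# long before it generalizes to the held-out half.
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2:]

class ModAddNet(nn.Module):
    """Small MLP stand-in for the paper's transformer (a simplification)."""
    def __init__(self, p, d=128):
        super().__init__()
        self.emb = nn.Embedding(p, d)
        self.mlp = nn.Sequential(nn.Linear(2 * d, 256), nn.ReLU(), nn.Linear(256, p))

    def forward(self, ab):
        return self.mlp(self.emb(ab).flatten(1))  # (batch, p) logits

model = ModAddNet(p)
# Weight decay is widely reported to matter for grokking; the value is a guess.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(100_000):  # keep training way past the overfitting point
    opt.zero_grad()
    loss_fn(model(pairs[train_idx]), labels[train_idx]).backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            tr = (model(pairs[train_idx]).argmax(1) == labels[train_idx]).float().mean()
            te = (model(pairs[test_idx]).argmax(1) == labels[test_idx]).float().mean()
        # Expect train accuracy to hit 1.0 early; test accuracy jumps much later.
        print(f"step {step}: train {tr:.2f}  test {te:.2f}")
```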
