
Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize

Generally Intelligent

CHAPTER

The Limits of Neural Tangent Kernels in Deep Learning

The theory of machine learning is often divided into two main facets. One side concerns training dynamics: questions about optimization behavior and convergence lie in this camp. The other side is generalization, which asks: independent of how you arrived at the final solution, how well does it perform? How well does it generalize from your training data to your test data? So I think people who point out the limitations of neural tangent kernels are correct in noting them, but at the same time, I think there's a surprising amount we can learn just from studying kernels. We were not initially aware that there's an enormous body of literature on this, which I've since gone back and integrated. The thing that we noticed fairly quickly was that…
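The kernel perspective discussed here can be made concrete with a small sketch. This is illustrative only (not from the episode): for a one-hidden-layer ReLU network, the empirical neural tangent kernel between two inputs is the inner product of the network's parameter gradients at those inputs, K(x, x') = ⟨∇θ f(x), ∇θ f(x')⟩. All names and sizes below are hypothetical.

```python
import numpy as np

# Toy network: f(x) = v . relu(W x). The empirical NTK Gram matrix
# for a batch X is J J^T, where row i of J is grad_theta f(x_i).
rng = np.random.default_rng(0)
d, h = 3, 50                              # input dim, hidden width (arbitrary)
W = rng.normal(size=(h, d)) / np.sqrt(d)  # standard NTK-style scaling
v = rng.normal(size=h) / np.sqrt(h)

def param_grad(x):
    """Gradient of f(x) with respect to all parameters, flattened."""
    pre = W @ x                        # pre-activations
    act = np.maximum(pre, 0.0)         # relu
    mask = (pre > 0).astype(float)     # relu derivative
    grad_v = act                       # df/dv
    grad_W = np.outer(v * mask, x)     # df/dW
    return np.concatenate([grad_v, grad_W.ravel()])

def empirical_ntk(X):
    """Empirical NTK Gram matrix for the rows of X."""
    J = np.stack([param_grad(x) for x in X])
    return J @ J.T

X = rng.normal(size=(4, d))
K = empirical_ntk(X)
```

Because K is a Gram matrix of gradients, it is symmetric and positive semi-definite by construction; the NTK literature studies the infinite-width limit in which this kernel stays fixed during training, which is also the regime whose limitations the conversation refers to.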
