
Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize

Generally Intelligent


The Limits of Neural Tangent Kernels in Deep Learning

The theory of machine learning algorithms is often divided into two main facets. There are questions of training dynamics: things like optimization behavior and convergence lie in this camp. The other side is generalization, which asks: independent of how you got to the final solution, how well does it do? How well does it generalize from your training data to your test data? So I think people who point out its limitations are correct in noting limitations, but at the same time, I think there's a surprising amount we can learn just from studying kernels. We were not super aware that there's an enormous body of literature on this that I've since gone back and integrated. The thing that we noticed fairly quickly was that
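Since the snippet centers on what can be learned from studying kernels, here is a minimal sketch of the empirical neural tangent kernel for a tiny one-hidden-layer network. The network, its width, and all variable names are illustrative assumptions, not something described in the episode; the NTK itself is just the inner product of parameter gradients, K(x, x') = <df(x)/dθ, df(x')/dθ>.

```python
import numpy as np

# Illustrative one-hidden-layer network (not from the episode):
# f(x) = w2 . tanh(W1 x), with standard 1/sqrt(fan-in) scaling.
rng = np.random.default_rng(0)
d, h = 3, 64                              # input dim, hidden width (assumed)
W1 = rng.normal(size=(h, d)) / np.sqrt(d)
w2 = rng.normal(size=h) / np.sqrt(h)

def grad_params(x):
    # Analytic gradient of the scalar output f(x) wrt all parameters, flattened.
    a = np.tanh(W1 @ x)
    dW1 = np.outer(w2 * (1 - a**2), x)    # df/dW1 via the chain rule
    dw2 = a                               # df/dw2
    return np.concatenate([dW1.ravel(), dw2])

def ntk(x, xp):
    # Empirical NTK: inner product of the two parameter-gradient vectors.
    return grad_params(x) @ grad_params(xp)

x1, x2 = rng.normal(size=d), rng.normal(size=d)
print(ntk(x1, x1), ntk(x1, x2))           # K(x1, x1) >= 0; K is symmetric
```

Because the kernel is a Gram matrix of gradients, it is symmetric and positive semidefinite by construction, which is what lets kernel-regression tools be applied to questions about the network's generalization.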

