Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize

The Problem With Neural Networks

The project was about finding critical points of loss surfaces. One leading theory at the time, inspired by an old stat-mech calculation from the '90s, was that there are no bad local minima. So if you could answer arbitrary questions about loss landscapes, you could answer arbitrary questions about optimization. The simplest possible statistical physics model of a random surface exhibits this property.
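A hedged illustration of the idea (not from the episode itself): in the classic random-matrix picture, the Hessian at a critical point of a random Gaussian surface behaves like a GOE matrix whose spectrum is shifted by the loss level. Sampling such matrices shows why unshifted critical points are overwhelmingly saddles, while a shifted spectrum gives a local minimum. The normalization and shift values below are illustrative choices, not results from the conversation.

```python
import numpy as np

rng = np.random.default_rng(0)

def goe_negative_fraction(n, shift=0.0, trials=200):
    """Average fraction of negative eigenvalues of a shifted GOE matrix.

    H = (A + A^T) / sqrt(2n) has the semicircle spectrum on [-2, 2];
    adding shift * I moves the whole spectrum up by `shift`.
    """
    fracs = []
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / np.sqrt(2 * n) + shift * np.eye(n)
        fracs.append(np.mean(np.linalg.eigvalsh(h) < 0))
    return float(np.mean(fracs))

# Unshifted: the semicircle is centered at zero, so about half the
# directions are descent directions -- a saddle, not a bad minimum.
print(goe_negative_fraction(50))           # ~0.5
# Shifting the spectrum well above zero makes every eigenvalue
# positive: only then does the critical point look like a minimum.
print(goe_negative_fraction(50, shift=3))  # ~0.0
```

The toy model captures the qualitative claim: in high dimensions, random critical points almost never have an all-positive Hessian unless the spectrum is shifted, so "bad" high-loss local minima are rare.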

