
Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize


The Inductive Bias of Neural Networks

The single-layer network is smaller in terms of parameter count. It's an interesting characterization result. Would you guess that anything like this applies to other architectures that are not fully connected? I don't think that convolutional networks can be collapsed to a single layer the way fully connected networks can, but I do think that the deeper idea of reverse-engineering kernels is powerful and probably holds across architectures.
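For context on the kernel idea being discussed, here is a minimal sketch (not code from the episode or from the guest's papers) of why a deep fully connected network is "collapsible" in the infinite-width limit in the first place. Assuming unit-norm inputs, ReLU activations, and the standard NTK parameterization, the network's neural tangent kernel is a scalar function of the dot product x·x', computable by a layer-wise recursion (Jacot et al., 2018, using the arc-cosine closed forms for ReLU). A dot-product kernel of this form is exactly the kind of object that reverse-engineering techniques can match with a single hidden layer and a designed activation.

```python
# Minimal sketch: the infinite-width NTK of a deep ReLU net on unit-norm
# inputs depends on x and x' only through c = x.x', so it is a dot-product
# kernel at every depth. (Illustrative; assumes NTK parameterization.)
import numpy as np

def relu_ntk(c, depth):
    """NTK of a depth-`depth` fully connected ReLU network, evaluated at
    dot product c between two unit-norm inputs, via the standard
    layer-wise recursion."""
    c = np.clip(c, -1.0, 1.0)
    sigma = c      # layer-0 covariance kernel: the raw dot product
    ntk = c        # running tangent kernel
    for _ in range(depth):
        t = np.arccos(np.clip(sigma, -1.0, 1.0))
        sigma_dot = (np.pi - t) / np.pi                        # E[phi'(u) phi'(v)]
        sigma = (np.sin(t) + (np.pi - t) * np.cos(t)) / np.pi  # E[phi(u) phi(v)]
        ntk = sigma + sigma_dot * ntk                          # NTK recursion
    return ntk

# Depth only reshapes the scalar function of c; the kernel never stops
# being a dot-product kernel, which the single-layer construction exploits.
for c in [-0.5, 0.0, 0.5, 0.99]:
    print(c, relu_ntk(c, depth=1), relu_ntk(c, depth=4))
```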

