
Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize

Generally Intelligent


Convolutional Networks Do Better Than Fully Connected Networks on Image Data

In theory, you could take a convolutional kernel and try it on different types of data. Just computing this kernel and looking at an alignment score would give you a pretty good idea of which model class would perform best. Building off that idea, there was something I got interested in for a time around the middle of my PhD: can we use this as a design principle to discover new architectures? That is, a method for going from the kernel to the network. Now, to explain to the satisfaction of you as a mathematician why this was harder to do: it has to do with the poor alignment of something called the neural tangent kernel.
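
The alignment score mentioned here is, in spirit, a kernel-target alignment: compare the kernel matrix a given architecture induces on a dataset against the ideal kernel built from the labels. Below is a minimal sketch of that idea; the function name, toy labels, and placeholder kernels are illustrative assumptions, not taken from the episode.

```python
import numpy as np

def kernel_target_alignment(K: np.ndarray, y: np.ndarray) -> float:
    """Cosine similarity between a kernel matrix K and the ideal kernel y y^T.

    A higher score suggests the kernel's inductive bias matches the task,
    which is the sense in which one architecture's kernel can "align" better
    with image data than another's.
    """
    target = np.outer(y, y)                        # ideal kernel for labels y
    num = np.sum(K * target)                       # Frobenius inner product <K, y y^T>
    denom = np.linalg.norm(K) * np.linalg.norm(target)
    return num / denom

# Toy usage: two synthetic PSD kernels standing in for, say, a convolutional
# kernel and a fully-connected kernel evaluated on the same data points.
rng = np.random.default_rng(0)
n = 200
y = np.sign(rng.standard_normal(n))                # +/-1 labels
A = rng.standard_normal((n, n)); K_conv = A @ A.T  # placeholder kernel 1
B = rng.standard_normal((n, n)); K_fc = B @ B.T    # placeholder kernel 2
print(kernel_target_alignment(K_conv, y), kernel_target_alignment(K_fc, y))
```

In practice the two kernels would be the neural tangent kernels of the candidate architectures evaluated on the same images (computed, for example, with a library such as neural-tangents), and the one with higher alignment would be expected to generalize better on that data.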

