The Complexity Science of Neural Nets
Why is it better to have loads of layers? The universal approximation theorem says we don't need them. Exactly, and the answer, by the way, isn't really known. It might simply be that a flat structure could in principle learn the same function, but it would take so long to do so that it's impractical. So I think part of it has to do with learnability. But to be fair, and perhaps we'll come back to this, that's one of the reasons why the complexity science of neural nets is interesting: we don't know what they're capable of.
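A minimal sketch of the point being made, not from the episode: both a shallow-and-wide network and a deep one can in principle represent the same function, but depth can make a compositional target easier to learn in practice. The target function, architectures, and hyperparameters here are all illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# An illustrative compositional target: nested structure that a deep net
# can mirror layer by layer.
def target(x):
    return torch.sin(4 * torch.sin(4 * torch.sin(4 * x)))

x = torch.linspace(-1, 1, 512).unsqueeze(1)
y = target(x)

# "Flat" net: a single wide hidden layer (the theorem's regime).
shallow = nn.Sequential(nn.Linear(1, 256), nn.Tanh(), nn.Linear(256, 1))

# Deep net: several narrow layers, with a roughly comparable parameter count.
deep = nn.Sequential(
    nn.Linear(1, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 1),
)

def train(model, steps=3000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

print("shallow final MSE:", train(shallow))
print("deep final MSE:   ", train(deep))
```

Under these assumptions, the comparison probes learnability rather than expressivity: the shallow net could represent the target with enough units, but gradient descent often reaches a lower error faster with the deep architecture.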