Why is it better to have loads of layers? The theorem says we don't need them. Exactly. And that's, by the way, not really known. It might simply be that if you have this kind of flat structure, it could in principle learn, but it would take far too long to do so. So I think part of it has to do with learnability. But to be fair, and we'll perhaps come back to this, that's one of the reasons why the complexity science of neural nets is interesting: because we don't know what they're capable of.
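The theorem David gestures at here is the universal approximation result: even a "flat" network with a single hidden layer can, in principle, represent any continuous function if it's wide enough. Here's a minimal sketch of that idea (not from the episode; plain NumPy, with the weights set by hand rather than learned). It builds a one-hidden-layer ReLU network that linearly interpolates sin(x), which is exactly the in-principle capability the depth debate is about:

```python
import numpy as np

def shallow_relu_approx(f, lo, hi, n_units):
    """One-hidden-layer ReLU net that piecewise-linearly interpolates f on [lo, hi]."""
    # Knots partition the interval; one hidden ReLU unit starts at each knot but the last.
    knots = np.linspace(lo, hi, n_units + 1)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)          # slope of each linear piece
    # Output weights: the first slope, then the slope *change* at each interior knot.
    out_w = np.concatenate(([slopes[0]], np.diff(slopes)))

    def g(x):
        # Hidden layer: ReLU(x - knot_i), all input weights equal to 1.
        hidden = np.maximum(0.0, x[:, None] - knots[:-1][None, :])
        return vals[0] + hidden @ out_w

    return g

g = shallow_relu_approx(np.sin, 0.0, 2 * np.pi, n_units=50)
xs = np.linspace(0.0, 2 * np.pi, 1000)
print("max |error| with 50 hidden units:", np.abs(g(xs) - np.sin(xs)).max())
```

The weights here are written down analytically, which is the point of the theorem; in practice they would have to be found by gradient descent, and that's where David's learnability argument for deep, layered structure comes in.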
In our last episode we talked all about intelligence, specifically about what made us intelligent. In this episode we jump into artificial intelligence, and we're joined again by David Krakauer, President and William H. Miller Professor of Complex Systems at the Santa Fe Institute.
This episode was recorded before the release of GPT-4, so David doesn't mention it specifically, but he does take us through the history of artificial intelligence, from Alan Turing all the way to machine learning and neural networks. And he's going to ask the question: Are we really building something that's intelligent, or are we just building mimics and parrots?
This show is produced in collaboration with Wavelength Creative. Visit wavelengthcreative.com for more information.