Speaker 2
In our last episode, we talked all about intelligence, specifically what made us intelligent. In this episode, we jump into artificial intelligence. We're joined again by David Krakauer, President and William H. Miller Professor of Complex Systems at the Santa Fe Institute. This episode was recorded before the release of GPT-4, so David doesn't specifically talk about it, but he does take us through the history of artificial intelligence, from Alan Turing all the way to machine learning and neural networks. And he asks, are we really building something that's intelligent? Or are we just building mimics and parrots? This is Simplifying Complexity, a podcast where we explore the underlying principles of complex systems, systems that seem to defy our rational view of the world, like economies, ecologies, or even you or me. I'm forensic engineer Sean Brady and I'll be your host. Welcome David back on the show and welcome to part two of our discussion on intelligence. So the last episode was all about the evolution of intelligence. This is all about the automation of intelligence. And in the last episode, we talked about representation and cognitive inference and strategy. And we ended by briefly giving a teaser about Alan Turing. And I believe that's where you want to begin, David.
Speaker 1
The work of Alan Turing. You could pick other people, but for me, he's the pivotal figure here, because in an interesting way he connects directly to a history of reasoning about intelligence and mind and brain. In 1936, Turing writes a canonical paper where he introduces what we now understand as functions that can't be computed using what we now call Turing machines. In the process of developing that, he basically made the distinction between hardware and software. This was a key point: it was the first engineering analog to mind and brain. Then later, in 1948 and 1950, Alan Turing started thinking about computers that could mimic human intelligence. Turing was very upset that no one could agree on a definition of intelligence, the kind of things we talked about yesterday. He thought that would never be resolved. So he said, here's what I'm going to do: I'm going to be brutally positivistic, and I'm going to say, this is what it is. If you can convince me that you are intelligent, then you are. As simple as that. And he called that the imitation game. We'll sit down and have a conversation, and I'll ask you a whole series of difficult questions. What's your favorite fruit? What's your favorite joke? What happens when you fall over backwards and hit your head? I'll just keep quizzing you, and then I'll make a determination. Right? Okay.
That style of reasoning generated a kind of backlash, which was, you know, it's very easy to fool a human being. We're constantly falling for visual illusions. We're constantly being conned. Maybe Alan Turing just came up with the ultimate con man. And we'll get back to this at the end, when we talk about neural networks that pass very elaborate Turing tests. So that for me is the start.
I think at this point in our history, the obvious dominant paradigm is machine learning using deep neural networks. So take those simple units, starting in 1943, and just start connecting them to more and more other simple units. And as computer memory and computer processing power increase, those things grow. It's quite literally that simple. There was a period, sort of between the 80s and the 2000s, where that program stalled. Now, it's interesting. There was this movement called parallel distributed processing, and this was really pushed by cognitive scientists, most notably by people like Rumelhart and McClelland, along with Geoff Hinton and the backprop algorithm. It doesn't matter, it's just a technique for essentially doing gradient descent to train the weights in a neural network. It was shown to be able to solve very simple classes of problems, but nothing on a human scale, nothing that would take your breath away, where you'd say, wow, that's an intelligent agent. And then Geoff Hinton sort of made a discovery, which is that they hadn't been ambitious enough in two different ways. One is they hadn't given these networks enough data to train on. By that time, of course, the internet existed, the World Wide Web existed, and it was now possible to get all these tagged datasets so as to train these in a supervised fashion. And in around 2012, they participated in a competition called the ImageNet competition, where artificial intelligence systems are asked to classify planar natural images, sort of 2D images. And this thing just blew everything out of the water. It wasn't an incremental improvement, it was a massive improvement.
And that moment transformed the field. It moved from what some people call the AI winter to the AI spring. This has been recurrent throughout its history: all of a sudden people think, oh my God, those things that we had cast aside as mere computational curiosities can actually solve very hard tasks vastly better than the other systems we've been using, which, in that case, for example, used much more neuroscientific knowledge, much more knowledge of objects and visual pathways.
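To make the backprop idea David mentions a little more concrete, here is a minimal sketch, not from the episode, of what "gradient descent to train the weights in a neural network" amounts to: a tiny one-hidden-layer network learning the XOR function. The dataset, network size, learning rate, and step count are all illustrative assumptions chosen for brevity.

```python
# Illustrative sketch (not from the episode): training a tiny neural network
# by gradient descent on backpropagated error, using the toy XOR problem.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: propagate inputs through the layers.
    h = sigmoid(X @ W1)      # hidden activations
    out = sigmoid(h @ W2)    # network output

    # Mean squared error loss (a simple choice for illustration).
    loss = np.mean((out - y) ** 2)

    # Backward pass: gradients of the loss with respect to each weight matrix.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    grad_W2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ d_h

    # Gradient descent update: nudge each weight downhill on the loss surface.
    W1 -= learning_rate * grad_W1
    W2 -= learning_rate * grad_W2

print("final loss:", round(float(loss), 4))
print("predictions:", out.round(2).ravel())
```

The point of the sketch is only that "training" here is nothing more mysterious than repeatedly adjusting numbers in whichever direction reduces an error measure; scaling that same loop up to huge datasets and huge networks is what changed in the 2010s.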