There is truth to what you say on some level, about the universal approximation property of neural networks. So even if a human neuron does something really complicated, as far as we can tell there's no particular reason that couldn't be approximated to an arbitrary degree of accuracy by a neural net. Maybe instead of calling it universal approximation, it might be better to call it universal simulation. Right. Well, you can literally say: hey, let's imagine a truly gigantic artificial neural network where groups of, whatever, neurons with 100,000 connections between them each correspond to a biological neuron, and now this whole system can simulate a large number of biological neurons. Do you see what I mean?
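To make that idea concrete, here is a minimal sketch of universal approximation in practice: a small group of artificial neurons (one hidden layer) is trained to mimic a complicated nonlinear response function, which stands in for the input-output behavior of a single biological neuron. The target function, layer sizes, and training details below are illustrative assumptions, not anything specified in the conversation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a "really complicated" biological neuron: an arbitrary
# messy nonlinear map from 8 synaptic inputs to one output. (Purely a
# toy assumption; real neuron dynamics are far richer.)
def biological_neuron(x):
    return (torch.tanh(x.sum(dim=1, keepdim=True))
            * torch.sin(3.0 * x[:, :1])
            + 0.1 * x[:, 1:2] ** 2)

# A group of artificial neurons tasked with approximating it.
approximator = nn.Sequential(
    nn.Linear(8, 256),
    nn.Tanh(),
    nn.Linear(256, 1),
)

optimizer = torch.optim.Adam(approximator.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.randn(512, 8)       # random synaptic inputs
    y = biological_neuron(x)      # the behavior we want to copy
    loss = nn.functional.mse_loss(approximator(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final approximation error (MSE): {loss.item():.5f}")
```

Driving that error lower is then largely a matter of widening the hidden layer and training longer, which is the spirit of the universal approximation results the conversation refers to.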
Read the full transcript here.
Can machines actually be intelligent? What sorts of tasks are narrower or broader than we usually believe? GPT-3 was trained to do a "single" task: predicting the next word in a body of text; so why does it seem to understand so many things? What's the connection between prediction and comprehension? What breakthroughs happened in the last few years that made GPT-3 possible? Will academia be able to stay on the cutting edge of AI research? And if not, then what will its new role be? How can an AI memorize actual training data but also generalize well? Are there any conceptual reasons why we couldn't make AIs increasingly powerful by just scaling up data and computing power indefinitely? What are the broad categories of dangers posed by AIs?
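For readers curious what that "single" task looks like mechanically, here is a minimal sketch of next-token prediction: a tiny model is trained with cross-entropy loss to predict each token from the one before it. The toy corpus, character-level tokens, and model architecture are illustrative assumptions; GPT-3 itself uses subword tokens and a vastly larger Transformer.

```python
import torch
import torch.nn as nn

text = "to be or not to be"
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}

# Training pairs: each character is used to predict the one that follows.
ids = torch.tensor([stoi[ch] for ch in text])
inputs, targets = ids[:-1], ids[1:]

# A tiny next-token model: embedding -> linear scores over the vocabulary.
model = nn.Sequential(nn.Embedding(len(vocab), 32), nn.Linear(32, len(vocab)))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):
    logits = model(inputs)  # predicted next-token scores, shape (N, vocab)
    loss = nn.functional.cross_entropy(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"next-token prediction loss: {loss.item():.3f}")
```

The connection to comprehension, in rough terms, is that driving this one loss down on enormous amounts of text pushes a model to absorb the regularities (grammar, facts, style) that make the next word predictable in the first place.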
Ilya Sutskever is Co-founder and Chief Scientist of OpenAI, which aims to build artificial general intelligence that benefits all of humanity. He leads research at OpenAI and is one of the architects behind the GPT models. Prior to OpenAI, Ilya was co-inventor of AlexNet and Sequence to Sequence Learning. He earned his Ph.D. in Computer Science from the University of Toronto. Follow him on Twitter at @ilyasut.