David Chalmers: What would be required for AI to become conscious? Because we don't understand how humans became conscious, it's a very interesting and challenging topic. One of our traditional criteria for consciousness in AI was the Turing test. We don't know whether consciousness is present in language models. There are a few giveaways right now, like when they say, "I am a language model from OpenAI."
The two hottest topics in tech right now are the rise of generative AI and, with Apple’s recent push into spatial computing, the mainstreaming of augmented reality. Will silicon-based machines develop sentience? Will human experience extend into virtual worlds? These distinct technologies may eventually blend to spawn a surprising future, as our “real” world becomes digitally enhanced and our machines behave increasingly like humans.
Today, a provocative discussion with some big (human) thinkers: Steven Johnson, visiting scholar at Google Labs and author of “Extra Life,” “Where Good Ideas Come From,” and “How We Got to Now”; philosopher and cognitive scientist David Chalmers, author of “The Conscious Mind” and “Reality+”; and Betaworks founder and AI investor John Borthwick.
• Want to learn more about our executive membership? Email podcast@nextbigideaclub.com
• “David Chalmers Thinks We May Be Living in a Simulation (and He’s OK With It)”
• “Steven Johnson & Michael Specter on the Future of Life”