This chapter explores the potential risks of superhuman AI, including existential threats to humanity. It presents differing perspectives on AI development and emphasizes the importance of aligning AI systems with human values to prevent catastrophic outcomes. The conversation also touches on the challenge of maintaining control over rapidly advancing AI and the potential societal impacts of competition in AI development.
Nick Bostrom’s previous book, Superintelligence: Paths, Dangers, Strategies, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong.
But what if things go right?
Bostrom and Shermer discuss: AI utopia and protopia • Trekonomics and post-scarcity economics • the hedonic treadmill and positional wealth values • colonizing the galaxy • the Fermi paradox: Where is everyone? • mind uploading and immortality • Google's Gemini AI debacle • LLMs, ChatGPT, and beyond • how we would know if an AI system were sentient.
Nick Bostrom is a professor at Oxford University, where he is the founding director of the Future of Humanity Institute. He is the world's most-cited philosopher aged 50 or under.