This chapter explores how large language models could shape humanity's trajectory toward utopia or dystopia, weighing the merits of scaling up models against incorporating external memory systems for enhanced reasoning. It also discusses inaccurate information generation, the influence of DEI ideology, AI hallucinations, and the concept of technological maturity, in which AI fulfills all basic needs, prompting reflection on the value of human activities in such a technologically advanced future.
Nick Bostrom’s previous book, Superintelligence: Paths, Dangers, Strategies, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong.
But what if things go right?
Bostrom and Shermer discuss: an AI utopia and protopia • Trekonomics and post-scarcity economics • the hedonic treadmill and positional wealth values • colonizing the galaxy • the Fermi paradox: where is everyone? • mind uploading and immortality • Google's Gemini AI debacle • LLMs, ChatGPT, and beyond • how we would know if an AI system was sentient.
Nick Bostrom is a professor at Oxford University, where he is the founding director of the Future of Humanity Institute. He is the world's most cited philosopher aged 50 or under.