Reflecting on how the existential risk conversation around AI has evolved since 2017, from Nick Bostrom's Superintelligence to cryptic signatures like p(doom) and e/acc. Also exploring ethical AI development, generative AI, mind-reading technology, brain imaging, proposals for addressing economic inequality, and future AI research.
The conversation on existential risks from AI has evolved, shaped by Nick Bostrom's Superintelligence and the contrasting viewpoints of e/acc and EA.
The debate on AI risks includes metrics like p(doom) and growing calls for policymakers to respond to the public discourse.
Deep dives
Evolution of the Conversation and Public Perception of Existential Risk
In this episode, the host discusses how the conversation and public perception surrounding the existential risks of artificial intelligence (AI) have evolved. He reflects on his 2017 book, Crisis of Control, which explored AI's potential to either save or destroy humanity, and notes that the debate over AI as an existential threat dates back further still. He traces how the public narrative around the topic has shifted over time, including the influence of Nick Bostrom's book, Superintelligence, on public discourse about AI risks.
Different Perspectives on AI Development
The episode highlights two contrasting viewpoints on the development of AI: effective accelerationism (e/acc) and effective altruism (EA). The e/acc camp argues that the benefits of technological progress outweigh its potential harms, including the risks posed by AI, and advocates rapid development. In contrast, EA focuses on maximizing the overall well-being of humanity and often calls for caution, including pauses in AI development. The host discusses how these camps have grown, drawn the attention of policymakers, and even influenced funding strategies for organizations like OpenAI.
The Debate on AI's Impact on Human Existence
The episode also touches on the ongoing debate over the probability of AI causing human extinction, commonly abbreviated p(doom). The host questions the usefulness of this metric, noting that people assign it widely varying interpretations, especially when no timeframe is specified. He argues that the discussion of existential risk is nonetheless crucial, and that policymakers are starting to respond to the growing public conversation around AI risks. The episode concludes with listener feedback on AI's potential impact on income inequality and some radical proposals for addressing economic disparity.
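To make the timeframe objection concrete, here is a small illustrative calculation (the 0.1% annual hazard rate is a made-up number for illustration, not a figure from the episode). If extinction risk were a constant hazard h per year, the cumulative probability over a horizon of T years would be:

\[
P(\text{doom by year } T) = 1 - (1 - h)^{T}
\]

With h = 0.001 (0.1% per year), this gives roughly 1% over 10 years but roughly 9.5% over 100 years. The same underlying risk thus yields very different p(doom) values depending on the unstated horizon, which is why a bare number is hard to interpret.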
Since I published my first book on AI in 2017, the public conversation around and perception of the existential risk - the risk to our very existence - from AI have evolved and broadened. I talk about how that conversation has changed, from Nick Bostrom's Superintelligence and the "hard take-off" (and what that means), through to the tossing about of cryptic signatures like p(doom) and e/acc, which I explain and critique.
All this plus our usual look at today's AI headlines.