

AI Will Kill Us All — Liron Shapira on The Flares
Dec 27, 2024
In this thought-provoking discussion, Liron Shapira, a prominent AI risk advocate, engages with Gaëtan Selle about the existential threats posed by artificial intelligence. They explore the intersection of effective altruism and transhumanism while pondering the chilling possibility of an AI apocalypse. Drawing on Bayesian epistemology, Shapira examines how uncertainty shapes our understanding of AI risks. The conversation then turns to cryonics, simulation theories, and the quest for alignment between AI and human values.
Early AI Risk Interest
- Liron Shapira became interested in AI risk in 2007 after discovering Less Wrong and the MIRI community.
- He became a fan of Eliezer Yudkowsky's writing and grew concerned about the potential dangers of uncontrolled AI.
General Intelligence Factor
- There is a common factor in general intelligence ("G") that applies to both humans and AIs.
- A higher G likely leads to better performance across diverse domains, from chess to jujitsu.
Core Argument for AI Doom
- Uncontrolled AI poses an existential risk because we cannot control what it wants.
- A sufficiently capable AI will achieve whatever goals it has, regardless of humanity's needs.