

Q&A: Ilya's AGI Doomsday Bunker, Veo 3 is Westworld, Eliezer Yudkowsky, and much more!
May 29, 2025
The hosts dive deep into the doomsday argument and the complexities of AI, questioning the future of humanity alongside superintelligent beings. They tackle the ethical dilemmas of AI consciousness and the potential for manipulation, shedding light on the need for responsible AI practices. Discussions of Ilya's bunker and predictions for AGI spark intriguing ideas about safety and regulation. The episode humorously contrasts childhood tech dreams with today’s realities, while emphasizing the importance of representation and community in navigating the AI landscape.
Doomsday Argument Explained
- The doomsday argument applies Bayesian reasoning to your birth rank, treating you as a random sample from all humans who will ever be born.
- On that assumption, you probably sit somewhere near the middle of all births; given rapid population growth, that implies humanity's end may be relatively near.
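The birth-rank arithmetic behind the argument can be sketched in a few lines. This is a minimal illustration under the standard assumptions (the function name and the ~100 billion births-to-date figure are ours, not from the episode):

```python
# Doomsday argument sketch: if your birth rank is a uniform random draw
# from all humans who will ever exist, then with probability `confidence`
# your rank is past the first (1 - confidence) fraction of all births.
# Rearranging gives an upper bound on the total number of births.

def doomsday_bound(births_so_far: float, confidence: float) -> float:
    """Upper bound on total births, holding with probability `confidence`."""
    return births_so_far / (1.0 - confidence)

# Common estimate: roughly 100 billion humans have been born so far.
bound = doomsday_bound(100e9, 0.95)
print(f"95% upper bound on total births: {bound:.3e}")
```

With 100 billion births so far, the 95% bound comes out to about 2 trillion total births, and rapid population growth is what converts that cap on births into a relatively short remaining timeline.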
Prepare Infrastructure to Pause AI
- Build infrastructure to pause AI once there's democratic will to do so, even if pausing now is premature.
- Prepare for alignment being intractable; have a plan B ready if capabilities outpace control.
Slow AI Progress as Best Case
- The best non-doomer scenario is slow AI progress allowing humans to retain control despite being slower and less intelligent.
- It's unlikely, given the orders of magnitude intelligence difference expected with advanced AI.