
The Letter That Could Rewrite the Future of AI | Warning Shots #15
Oct 26, 2025
This week’s discussion dives into the Future of Life Institute's bold call to halt superintelligence development until proven safe. The hosts explore the evolution of AI safety statements and the emerging sentiment for stricter regulations. They also tackle the societal risks posed by superintelligence and examine how public letters could influence policy. With ongoing debates on whether such statements can translate into real political change, the conversation highlights a significant shift in the AI safety landscape.
AI Snips
2023 Statement Normalized Existential Risk
- The 2023 Center for AI Safety statement normalized the idea that AI poses an extinction risk.
- Signing it signaled that signatories accept a non-negligible probability that AI could kill everyone.
Ban Until Proven Safe Becomes The Ask
- The Future of Life Institute's new statement explicitly calls for prohibiting the development of superintelligence until it is proven safe and publicly accepted.
- That shifts public messaging from "AI is risky" to "don't build superintelligence now."
Make Superintelligence Tangible
- Superintelligence is framed as an agent far smarter than humans that could replace social roles and reshape reality.
- Presenting vivid human-scale consequences helps people grasp why a ban is proposed.