
Don't Worry About the Vase Podcast: Asking (Some Of) The Right Questions
Oct 27, 2025
This discussion scrutinizes the existential risks of advanced AI. It explores which developments might shift risk estimates in the near future, highlighting both warning signs and reassuring signals, and emphasizes the need for transparency and government engagement. The conversation also delves into the complexities of alignment plans and the potential of treaties aimed at regulating AI development. Finally, it examines the implications of public statements against racing to superintelligence.
AI Snips
Clarifying Which Questions Matter
- Zvi frames the discussion as clarifying which empirical signals would meaningfully change beliefs about existential AI risk.
- He distinguishes several related questions: what would shift those probabilities, and what would justify waiting 25–50 years.
Watch Specific Signals To Update Risk
- Monitor concrete signals like timelines, jaggedness of capability gains, and coding automation to update existential risk estimates.
- Watch for misalignment in less contrived scenarios and progress in transparency and interpretability research.
Systemic Nature Of The Threat
- Zvi argues the core difficulty is not purely technical but systemic: a sufficiently advanced AI will by default outcompete humans.
- He assigns a high conditional probability (60–90%) that building ASI leads to human extinction, absent very strong mitigation.