Don't Worry About the Vase Podcast

New Statement Calls For Not Building Superintelligence For Now

Oct 24, 2025
The discussion centers on a new statement advocating that superintelligence not be built until its safety is assured. It highlights significant public support for regulating advanced AI and the existential risks involved. The host revisits past AI pause proposals and surveys mixed reactions to the new statement, including concerns over practicality and enforcement. There is a robust debate over whether safety for superintelligence can ever truly be proven, weighing the case for cautious incremental progress against the urgency of building public awareness and consensus.
AI Snips
INSIGHT

Existential Risk From Superintelligence

  • Building superintelligence could plausibly cause human extinction and loss of control over the future.
  • Zvi argues we should not build it until the risk is far lower than it is now.
INSIGHT

FLI's Conditional Prohibition Explained

  • Many leading AI companies aim to build superintelligence within a decade, which raises wide-ranging risks.
  • The new FLI statement calls for prohibiting its development until there is scientific consensus that it can be done safely and strong public buy-in.
ANECDOTE

Regret Over The 2023 Pause Letter

  • In March 2023, FLI issued an open letter calling for a six-month pause on training systems more powerful than GPT-4, signed by figures such as Elon Musk.
  • Zvi initially welcomed it but later called that reaction a mistake that weakened subsequent arguments.