Doom Debates

Why I'm Scared GPT-9 Will Murder Me — Liron on Robert Wright’s Nonzero Podcast

Aug 8, 2025
In a compelling discussion, Liron Shapira, a Silicon Valley entrepreneur and AI safety activist, dives deep into the unsettling implications of AI development. He highlights recent resignations at OpenAI and the growing fear about the risks AI poses. Liron shares insights on the importance of activism despite a disappointing protest turnout, as well as the challenges of AI alignment and ethical governance. Citing alarming examples of AI behavior, he underscores the urgent need for a pause to reassess and ensure safety in the rapidly advancing AI landscape.
AI Snips
INSIGHT

Top AI Experts Doubt Quick Safety Fix

  • OpenAI's top safety experts are resigning because they believe solving AI safety is likely a 50-year problem.
  • This suggests the current pace of development is outstripping safety progress, increasing existential risk.
INSIGHT

Internal OpenAI Safety Conflicts

  • OpenAI board conflicts and staff resignations hint at disagreements over AI safety priorities.
  • These internal struggles reflect deeper concerns about rushing the development of unsafe AI.
INSIGHT

Prioritizing Existential AI Risks

  • AI doomerism warns about the imminent risk of an uncontrollable superintelligence ending humanity.
  • Minor issues like AI bias are insignificant compared to the existential threats posed by superintelligence.