Jack Neel Podcast

AI Safety Expert: Humanity’s Last Invention—99.99% Chance of Extinction

Dec 30, 2025
Dr. Roman Yampolskiy, an AI safety researcher and professor, shares alarming insights on the existential risks of advanced AI. He argues humanity faces a 99.99% chance of extinction due to superintelligence, exploring the dangers of recursive self-improvement and the incentives driving AI development. Roman highlights current societal shifts caused by AI, from job automation to moral challenges. He emphasizes the importance of regulation and warns of the perils of pursuing AGI too hastily, all while maintaining a surprisingly calm outlook on our future.
INSIGHT

Uncontrollable Leap Beyond Human Intelligence

  • Roman Yampolskiy argues that superintelligence cannot be indefinitely controlled once it surpasses human intelligence.
  • Once humans lose control, they no longer decide what happens to them or to their future.
INSIGHT

Recursive Self‑Improvement As The Tipping Point

  • Roman identifies recursive self-improvement as the likely point of no return toward autonomous AI evolution.
  • When systems can independently design better systems, humans are cut out of the loop and intervention becomes nearly impossible.
ADVICE

Use Political Pressure To Pause AGI Work

  • Roman says collective political pressure could force labs and governments to pause dangerous experiments.
  • Leaders and the public must agree to prohibit risky AGI development to have any chance of stopping it.