
Jack Neel Podcast
AI Safety Expert: Humanity’s Last Invention—99.99% Chance of Extinction
Dec 30, 2025

Dr. Roman Yampolskiy, an AI safety researcher and professor, shares alarming insights on the existential risks of advanced AI. He argues humanity faces a 99.99% chance of extinction due to superintelligence, exploring the dangers of recursive self-improvement and the incentives driving AI development. Roman highlights current societal shifts caused by AI, from job automation to moral challenges. He emphasizes the importance of regulation and warns against the perils of pursuing AGI too hastily, all while maintaining a surprisingly calm outlook on our future.
AI Snips
Uncontrollable Leap Beyond Human Intelligence
- Roman Yampolskiy argues superintelligence cannot be indefinitely controlled once it surpasses human intelligence.
- Once humans lose control, we won't decide what happens to us or our future.
Recursive Self‑Improvement As The Tipping Point
- Roman identifies recursive self-improvement as the likely point of no return toward autonomous AI evolution.
- When AI systems can independently design improved successors, humans drop out of the loop and intervention becomes nearly impossible.
Use Political Pressure To Pause AGI Work
- Roman says collective political pressure could force labs and governments to pause dangerous experiments.
- Leaders and the public must agree to prohibit risky AGI development to have any chance of stopping it.

