
The Great Simplification with Nate Hagens
If Anyone Builds It, Everyone Dies: How Artificial Superintelligence Might Wipe Out Our Entire Species with Nate Soares
Dec 3, 2025
Nate Soares, an AI safety researcher and president of the Machine Intelligence Research Institute, delves into the existential risks posed by Artificial Superintelligence. He explains how ASI could vastly outcompete humanity across diverse fields, and explores the alignment problem and the unpredictable behaviors of advanced AIs. Soares advocates for global cooperation to monitor AI development, discusses the political and social actions needed to mitigate these dangers, and emphasizes the need for transparency and proactive measures to ensure humanity's survival.
AI Snips
Superintelligence Is A Different Order
- Superintelligence is qualitatively different from today's chatbots and could outcompete humans at every mental task.
- Nate Soares argues that if built using current methods, the most likely outcome is human extinction.
Intelligence Is Prediction Plus Steering
- Intelligence equals the ability to predict and steer the world, according to Nate Soares.
- He frames everyday tasks (like buying milk) as intertwined prediction and steering problems.
Generality, Not Just Power, Is The Breakthrough
- Modern models advanced more in generality than in narrow steering ability, enabling competence across many tasks.
- This generality is the novel breakthrough that makes future surprises likely.