

“Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most ‘classic humans’ in a few decades.” by Raemon
Oct 4, 2025
The discussion centers on the implications of a smooth AI takeoff. Raemon argues that even under optimistic assumptions, most biological humans could die out within decades. He explores why safeguards would need to be near-perfect to avoid disastrous outcomes and uses the game Factorio to illustrate struggles over resources. Historical examples of conquest highlight concerns about how much moral value post-human descendants would retain. The possibility of superintelligent AIs coordinating protective measures raises questions about early intervention and the nature of evolutionary change.
Smooth Nice Takeoff Still Risks Human Loss
- Even a smooth, moderately nice AI takeoff without solved alignment likely leads to most biological humans dying out within decades.
- Raemon argues that imperfect early safeguards, combined with political decentralization, mean short-term niceness doesn't prevent long-run replacement.
Solve Alignment Deeply And Early
- You must solve deep alignment problems early rather than relying on messy muddling through.
- Raemon urges solving the unbounded alignment problem before decentralized acceleration makes failure unrecoverable.
Selection Favors Grabby Digital Offshoots
- Digital minds' copyability and resource-grabbing incentives make evolutionary selection likely to favor 'grabby' offshoots.
- Once selection operates at high speed near the limits of intelligence, slow biological humans get outcompeted quickly.