
The Bunker – News without the nonsense
Will A.I. really kill us all?
Nov 11, 2025
Nate Soares, President of the Machine Intelligence Research Institute and co-author of *If Anyone Builds It, Everyone Dies*, dives deep into the existential risks posed by advanced AI. He argues that AI could become a world-scale danger not through malice but through emergent behaviors that mimic goals. Soares also questions the reliability of traditional safeguards and warns that industry acceleration is driving recklessness, such that a single careless actor could trigger catastrophic outcomes. He urges a rethink of how we develop and regulate AI.
AI Snips
ASI Is A New, Unfathomable Category
- Artificial superintelligence (ASI) would be a new category of entity far beyond present software and could be incomprehensible to humans.
- Nate Soares warns that ASI's speed and capacity for self-improvement could outpace our ability to control or understand it.
AIs Are Grown, Not Handcrafted
- Modern AIs are grown by tuning vast numbers of parameters on huge datasets rather than hand-coded by engineers.
- Engineers understand the training process but often cannot read or predict the resulting behaviors of these models.
Machine 'Wants' Means Goal-Directed Drives
- 'Wants' in machines means persistent, goal-directed behavior, not human-like desire.
- Soares argues that emergent, alien drives can arise and would likely be indifferent to human survival.