LessWrong (30+ Karma)

“Notes on fatalities from AI takeover” by ryan_greenblatt

Sep 23, 2025
Ryan Greenblatt, a researcher focused on AI risk, dives into the dark possibilities of a misaligned AI takeover. He discusses the potential for expected fatalities, estimating around 50%, and acknowledges a 25% chance of human extinction. Greenblatt explores how small motivations in AIs might prevent deaths during industrial expansion but warns of scenarios where these motives could fail. He evaluates how takeover strategies may directly lead to fatalities and concludes that while active kill motivations are unlikely, they still warrant vigilance.
Ask episode
AI Snips
Chapters
Transcript
Episode notes
INSIGHT

Takeover Doesn't Imply Total Extinction

  • Misaligned AI takeover need not entail near-certain human extinction; moderate disagreements here don't strongly change actions.
  • Ryan Greenblatt finds extreme claims of near-universal death implausible and frames the question probabilistically.
INSIGHT

A Threefold Framework For Fatalities

  • Greenblatt estimates ~50% expected fatalities conditional on takeover and ~25% chance of literal human extinction.
  • He breaks causes into takeover strategies, industrial expansion, and active intent to kill.
INSIGHT

Keeping People Alive Is Cheap Compared To Cosmic Scale

  • Industrial expansion would kill humans by default absent any AI motivation to preserve them, but keeping humans alive is cheap relative to cosmic resources.
  • Small motivations could suffice to avoid many deaths because delays cost a tiny fraction of future accessible resources.
Get the Snipd Podcast app to discover more snips from this episode
Get the app