

“Contra Collier on IABIED” by Max Harms
Sep 20, 2025
Max Harms delivers a spirited rebuttal to Clara Collier's review of If Anyone Builds It, Everyone Dies (IABIED). He disputes the importance of FOOM, arguing that recursive self-improvement isn't load-bearing for the book's core case. The discussion turns to the perils of gradualism and the potential for a single catastrophic failure. Harms nitpicks some of Collier's interpretations while defending the authors' stylistic choices, and he closes by advocating for diverse critiques and more exploration in AI safety.
FOOM Not Required For Core Risk
- Max Harms argues that FOOM (fast takeoff) is not load-bearing for the book's core arguments about AI risk.
- The book's central points about alien minds and alignment problems stand even under gradual progress scenarios.
Core Claims Independent Of Takeoff Speed
- Harms lists the book's central claims, showing many don't rely on rapid takeoff.
- Points like AI alienness, instrumental pressures, and single-shot risk hold regardless of takeoff speed.
Multiple Takeoffs Still Dangerous
- Multiple AIs could take off in parallel; a single dominant ASI isn't necessary for catastrophe.
- Harms cites the book and related fiction that imagine many superhuman AIs, which lead to the same danger.