LessWrong (Curated & Popular)

“Foom & Doom 1: ‘Brain in a box in a basement’” by Steven Byrnes

Jun 24, 2025
In this discussion, AI safety researcher Steven Byrnes dives into the provocative ideas surrounding AI's potential for explosive growth. He elaborates on the concept of 'foom', in which AI could transition rapidly from basic capabilities to superintelligence, emerging even from an unremarkable setup like a "brain in a box in a basement". Byrnes critiques prevailing assumptions in AI safety and highlights more radical perspectives on AI development. He also addresses strategic risks, including the dangers of unaligned AI and the importance of proactive safety measures to mitigate potential disasters.
INSIGHT

Support for Fast 'Foom' Scenario

  • Steven Byrnes takes seriously a fast 'foom' scenario in which a small team, using a new AI paradigm and relatively little compute, rapidly creates ASI.
  • He strongly disagrees with mainstream AI safety researchers, most of whom dismiss this rapid-takeoff possibility.
INSIGHT

ASI From Different Paradigm Not LLMs

  • Byrnes rejects the idea that LLMs will scale smoothly to ASI.
  • He expects ASI to emerge from a different, brain-like AI paradigm rather than from incremental progress on LLMs.
INSIGHT

Human Brain as AI Existence Proof

  • The human brain offers an existence proof that a relatively simple core algorithm for general intelligence exists but has yet to be discovered.
  • Current LLMs lack this core algorithm, which explains why brain-like AGI remains elusive.