Within Reason

#129 Will MacAskill - We're Not Ready for Artificial General Intelligence

Nov 9, 2025
Will MacAskill, a Scottish philosopher and founder of the effective altruism movement, dives deep into the looming challenges of artificial general intelligence (AGI). He discusses the potential for AGI-related doomsday scenarios, highlighting risks like loss of control and the possibility of AI-driven government coups. Will emphasizes that alignment isn’t enough and proposes urgent measures, such as tracking compute resources, to ensure safety. He also critiques the economic incentives favoring rapid AI development, cautioning that society's slow response could lead to catastrophic consequences.
INSIGHT

AGI Is A Monumental Unprepared Transition

  • The transition to AGI will be one of the most momentous in human history, yet little preparation is underway.
  • Major companies are racing toward AGI while almost no societal effort goes into preparing for its consequences.
INSIGHT

Economic Forces Favor Speed Over Safety

  • Investment to build AGI vastly outweighs funding for safety and governance.
  • Economic incentives strongly favor accelerating capabilities over caution or regulation.
INSIGHT

Four Distinct Paths To Major AI Risk

  • Major AI risks fall into four paths: loss of control, acceleration of dangerous technologies, concentration of power, and degraded collective reasoning.
  • Any one of these routes could independently produce catastrophic or dystopian outcomes.