Future of Life Institute Podcast

We're Not Ready for AGI (with Will MacAskill)

Nov 14, 2025
Will MacAskill, a senior research fellow at Forethought and author known for his work on longtermist ethics, dives into the complexities of AI governance. He discusses moral error risks and the challenges of ensuring that AI systems reflect ethical reasoning. The conversation touches on the urgent need for space governance and how AI can reinforce biases through sycophantic behavior. MacAskill also presents the concept of 'viatopia' to emphasize flexibility in future moral choices, highlighting the importance of designing AIs for better moral reflection.
INSIGHT

Two Distinct Longterm Priorities

  • Longterm value splits into two priorities: preventing extinction and improving the future conditional on survival.
  • Will MacAskill argues that improving the future, conditional on survival, can be as important as, or more important than, mitigating extinction risk.
INSIGHT

Scale, Neglectedness, Tractability Framework

  • MacAskill uses scale, neglectedness, and tractability to compare priorities.
  • He argues that huge stakes can lie in improving the quality of the future, even when extinction risk is significant.
INSIGHT

Utopias Are Fragile To Moral Errors

  • Proposed utopias can hide a single moral error that undoes most of their value.
  • MacAskill warns that abundance combined with one catastrophic moral mistake can still yield a deeply flawed future.