Future of Life Institute Podcast

We're Not Ready for AGI (with Will MacAskill)

Nov 14, 2025
Will MacAskill, a senior research fellow at Forethought and author known for his work on longtermist ethics, dives into the complexities of AI governance. He discusses the risk of moral error and the challenge of ensuring that AI systems reflect sound ethical reasoning. The conversation covers the urgent need for space governance and how AI can reinforce users' biases through sycophantic behavior. MacAskill also presents his concept of 'viatopia' to emphasize keeping future moral choices open, highlighting the importance of designing AIs that support better moral reflection.
ADVICE

Prompt Models To Ask And Guide

  • Have models guide users with clarifying questions and context before giving moral answers.
  • Train AIs to request more information and to walk users through competing arguments, prompting thoughtful reflection.
INSIGHT

Models Show Incoherent Moral Stances

  • Current LLMs take inconsistent, incoherent metaethical positions.
  • MacAskill notes that models may endorse moral realism in the abstract yet give subjectivist answers on concrete cases.
INSIGHT

AI Can Create Indefinite Lock-In

  • AI enables unprecedented, persistent path dependence via treaty bots and enforceable code.
  • MacAskill warns that AI-embedded constitutions could entrench values indefinitely.