The Existential Hope Podcast

Andrew Critch on what AGI might look like in practice

Dec 11, 2025
Andrew Critch, an AI safety researcher and creator of tools like NotADoctor.ai, dives into the crucial question of what we will do with AGI. He argues that its impact will be dictated not just by its arrival but by the choices we make around it. Critch suggests AGI may turn out to be friendly and recommends focusing on shared moral values over perfectionism. He underlines the importance of building helpful AI products today, advocating cultural change through practical solutions rather than endless debate.
INSIGHT

What We Do With AGI Matters Most

  • The central question is not when AGI arrives but what we choose to do with it as individuals and societies.
  • Cities, families, businesses, and countries will make different decisions shaped by AGI's presence.
ANECDOTE

From Math PhD To AI Worrier-Builder

  • Andrew describes his path from a math PhD to worrying about AI, after a talk by Andrew Ng convinced him that deep learning could lead to AGI.
  • That worry shifted him from theorizing to building tools aimed at reducing extinction and societal risks.
INSIGHT

Alarmism Can Increase Risk

  • Public rhetoric often overstates the probability of an immediate treacherous turn by AGI and can be unhelpful.
  • Misleading alarmism creates social rifts that may increase existential risk rather than reduce it.