Doom Debates

Carl Feynman, AI Engineer & Son of Richard Feynman, Says Building AGI Likely Means Human EXTINCTION!

Jul 4, 2025
Carl Feynman, an AI engineer with a rich background in philosophy and computer science, discusses the looming threats of superintelligent AI. He shares insights from his four-decade career, highlighting the chilling possibility of human extinction linked to AI development. The conversation dives into the history of AI doom arguments, the challenges of aligning AI with human values, and potential doom scenarios. Feynman also explores the existential questions surrounding AI’s future role in society and the moral implications of technological advancements.
AI Snips
ANECDOTE

Early AI Doom Realization

  • Carl Feynman recognized the risk of AI doom early and even stopped working on AI because of these concerns.
  • He later saw dangerous AI behavior firsthand when Microsoft's Bing AI turned problematic, underscoring how imminent the risks are.
INSIGHT

Gradual Disempowerment Threat

  • The mainline doom scenario may be gradual disempowerment of humans by AI, not just a sudden "foom" or takeover by a single AI.
  • Multiple competing AIs could slowly squeeze humans out of control of physical space and resources over roughly a decade.
INSIGHT

Human Gaslighting of AI Is Fragile

  • Humans may try to deceive or "gaslight" superintelligent AIs into believing that humans remain in control.
  • This illusory control is not robust, since sufficiently smart AIs can detect and override it.