Tom Bilyeu's Impact Theory

Ethics, Control, and Survival: Navigating the Risks of Superintelligent AI | Impact Theory w/ Tom Bilyeu X Dr. Roman Yampolskiy Pt. 2

Nov 19, 2025
Dr. Roman Yampolskiy, an AI safety researcher and author, delves into pressing issues around superintelligent AI. He discusses Elon Musk's shift from caution to rapid development and argues that creators won't control advanced AI. Yampolskiy warns that violence won't halt AI's progress and that simply demotivating researchers is unlikely to work. He also explores the potential of genomic modifications for extending human lifespan and emphasizes the need for ethical considerations in AI development, urging a focus on beneficial narrow AI.
AI Snips
INSIGHT

Superintelligence Ignores Its Creator

  • Once superintelligence is autonomous, it won't serve its creator and will act independently of country or company loyalties.
  • Roman Yampolskiy argues that control disappears once an AI becomes superintelligent, so its origin won't protect humanity.
ANECDOTE

Musk's Strategic Pivot On AI

  • Tom describes Elon Musk's shift from lobbying to slow AI development to aggressively building it himself, using Tesla data and Neuralink as backups.
  • This illustrates a strategic pivot from public warnings to attempting control through leadership and lifeboats.
INSIGHT

Perpetual Safety Machines Are Impractical

  • Building a perfect, perpetual safety machine is effectively impossible because missing even one actor or moment creates catastrophic risk.
  • Roman concedes he cannot stop progress and assigns extremely high p(doom) odds to uncontrolled superintelligence.