Tom Bilyeu's Impact Theory

AI Scientist Warns Tom: Superintelligence Will Kill Us… SOON | Dr. Roman Yampolskiy X Tom Bilyeu Impact Theory

Nov 18, 2025
In this thought-provoking discussion, Dr. Roman Yampolskiy, an AI safety researcher and expert on existential risks, dives into the alarming implications of artificial superintelligence. He explores how close we are to achieving AGI and the uncontrollable threats it could pose. Yampolskiy discusses the dangers of recursive self-improvement in AI and the high probability that superintelligence could endanger humanity. He also examines societal impacts, such as mass unemployment, and considers the challenge of aligning AI with human values amid rapidly evolving technology.
INSIGHT

LLMs Are Approaching Full AGI

  • Roman argues current LLMs are close to AGI across many domains but lack lifelong learning and permanent memory.
  • He estimates we're roughly 50% of the way to full AGI and closing the remaining gaps rapidly.
INSIGHT

Generality Breaks Testing

  • Roman highlights how testing breaks down once systems become general, because the space of edge cases becomes effectively infinite.
  • He warns that creative general agents are unpredictable, much like humans, and therefore far harder to validate.
ADVICE

Prioritize Narrow Wins Over A Race To AGI

  • Avoid an arms race toward superintelligence and prioritize solving narrow high-value problems first.
  • Focus resources on narrow scientific applications rather than racing to general AI.