Bloggingheads.tv

Two Visions of AI Apocalypse (Robert Wright & David Krueger)

Sep 26, 2025
David Krueger, an AI researcher from the University of Montreal, explores the hidden dangers of advanced AI. He discusses Eliezer Yudkowsky's doomsday scenarios and his own concept of 'gradual disempowerment,' where humans slowly lose control to AI due to competitive pressures. Krueger emphasizes the opaque nature of deep learning, the risks of goal misgeneralization, and the potential for AIs to gain economic power. He warns of the urgent need for an AI-resistance movement to preserve human influence in decision-making.
INSIGHT

Why Intelligence Is Especially Powerful

  • Intelligence amounts to prediction plus planning and steering, and machine intelligence can far exceed human intelligence.
  • Deep learning systems are grown, not crafted, making their inner workings hard to understand.
INSIGHT

The Opacity Problem In Deep Learning

  • Modern deep learning systems are opaque: we nudge vast parameter sets toward desired behavior without understanding why they work.
  • This opacity produces alien concepts and surprising jailbreaks that defeat safeguards.
INSIGHT

Generalization Today, Uncertainty Tomorrow

  • Current models generalize and show emergent abilities, but we lack clarity on what they still miss compared to humans.
  • Future AI may combine deep learning with other ideas and require moderate breakthroughs to reach human-level generality.