Robert Wright's Nonzero

Two Visions of AI Apocalypse (Robert Wright & David Krueger)

Sep 26, 2025
In this captivating discussion, AI researcher David Krueger, a professor at the University of Montréal, dives into the existential risks posed by AI. He examines Eliezer Yudkowsky's alarming visions of uncontrollable superintelligent AIs and challenges the assumption that AIs will inevitably pursue unbounded goals. Krueger introduces his idea of 'gradual disempowerment': economic pressures could lead societies to slowly cede decision-making power to AI, eroding human agency. The interplay of AI safety, ownership, and cultural shifts adds further depth to the conversation.
INSIGHT

Intelligence As Prediction And Steering

  • Intelligence amounts to prediction plus planning, i.e. the ability to steer outcomes, and it can exceed human abilities across many domains.
  • David Krueger warns that greater power to predict and steer events makes AI politically and economically transformative.
INSIGHT

We 'Grow' Models We Don't Fully Understand

  • Modern deep learning systems are 'grown' rather than crafted, producing behaviors we often do not understand.
  • Krueger stresses that inscrutable internal representations make safe control and robust generalization difficult.
INSIGHT

Current Models Generalize — But Gaps Remain

  • Current AIs already generalize beyond simple memorization and show creative, flexible behavior.
  • Krueger cautions that we lack clarity on which human-like capacities are present and which gaps require new breakthroughs.