Foresight Institute Radio

Eliezer Yudkowsky vs Mark Miller | ASI Risks: Similar premises, opposite conclusions

Sep 24, 2025
Eliezer Yudkowsky, a decision theorist and AI alignment researcher, debates Mark Miller, a computer scientist and software security expert. They explore strategies for mitigating existential risk from AI, laying out their differing views on alignment and decentralization. Yudkowsky warns of potentially catastrophic outcomes if AGI development goes unregulated, while Miller advocates preserving human institutions as AI evolves. The conversation touches on prediction, trust, historical analogies to nuclear arms control, and the future dynamics of superintelligence governance.
INSIGHT

Deep Learning Raised The Stakes

  • Eliezer warns that recent deep learning advances have made alignment harder and raised the existential stakes.
  • He argues that unchecked progress could lead to catastrophic outcomes, including a civilization-ending ASI.
INSIGHT

Small-Kill-All Threats Are Already Real

  • Mark highlights multiple "small-kill-all" threats, such as engineered plagues and malware, that could topple civilization.
  • He emphasizes that defenses must improve because offense-capable technology is becoming widely accessible.
INSIGHT

Prediction vs Steering Distinction

  • Eliezer frames intelligence as prediction plus steering: predicting inputs and choosing actions that steer outcomes.
  • He stresses that steering adds an extra degree of freedom (preferences), which complicates alignment.