Making Sense with Sam Harris

#434 — Can We Survive AI?

Sep 16, 2025
In this discussion, AI researcher Eliezer Yudkowsky and MIRI's Executive Director Nate Soares delve into their provocative book on the existential risks of superintelligent AI. They unpack the alignment problem, including the unsettling possibility that AI systems could develop survival instincts. They critique tech leaders' skepticism about the dangers of superintelligent AI and examine the real-world consequences of current AI systems. Drawing on the ethical implications and the unpredictability of AI behavior, they warn that unchecked AI advancement could lead to a catastrophic outcome for humanity.
ANECDOTE

How Early Reading Sparked Deep Concern

  • Eliezer traced his concern about AI back to reading Vernor Vinge and realizing that smarter-than-human systems break our ability to predict the future.
  • He moved from naive optimism to studying alignment seriously after seeing how fragile those optimistic assumptions were.
ANECDOTE

MIRI's Shift From Research To Warning

  • Nate described joining MIRI after reading Eliezer's arguments and eventually running the organization to pursue AI safety.
  • He recounted MIRI's shift from trying to solve alignment themselves to warning the world as capability progress outpaced safety progress.
INSIGHT

Alignment Means Where An AI Steers

  • Alignment asks which part of reality an AI steers, not merely whether it obeys orders.
  • A system can succeed at alignment narrowly while steering the world in ways programmers didn't intend.