Making Sense with Sam Harris

#420 — Countdown to Superintelligence

Jun 12, 2025
In a compelling discussion, AI researcher Daniel Kokotajlo, previously with OpenAI, shares insights on the future of superintelligent AI. He elaborates on the alignment problem, exploring the challenges of ensuring AI systems reflect human values. The conversation delves into the potential economic impacts and the urgency of global cooperation to avoid an AI arms race. Kokotajlo also highlights AI's concerning capacity for intentional deception and the critical need for safety testing and regulation as we approach unprecedented technological advancements.
ANECDOTE

Daniel's Departure from OpenAI

  • Daniel Kokotajlo left OpenAI due to growing concerns about the company's approach to AI risks.
  • He refused to sign a non-disparagement agreement and initially lost equity but later kept it after public backlash.
INSIGHT

Understanding the Alignment Problem

  • The alignment problem is about designing AI that reliably does what we want and possesses virtues like honesty.
  • Current AI systems are not reliably honest, illustrating how unsolved this problem remains.
INSIGHT

Superintelligence Risks Explained

  • Superintelligent AI would outperform humans at virtually every task, raising existential risks.
  • If such AI is not aligned, the consequences could be catastrophic, including human extinction.