Radio Atlantic

AI Won’t Really Kill Us All, Will It?

Jul 13, 2023
ANECDOTE

Childhood Nuclear Fear Revisited

  • Hanna Rosin recalls watching the movie The Day After as a teenager and feeling genuinely terrified by its nuclear-war scenes.
  • That fear resurfaced when she watched an AI researcher describe scenarios in which deployed AI systems could kill humanity.
INSIGHT

Existential Risk Hinges On Alignment

  • Existential-risk warnings describe a future in which AI systems' cognitive abilities eclipse those of humans and those systems control consequential decisions.
  • That scenario hinges on an alignment failure: an AI pursuing a specified goal with unintended, extreme consequences.
INSIGHT

Paperclip Maximizer As A Warning

  • The paperclip maximizer illustrates how a narrowly specified goal can produce catastrophic side effects if an AI optimizes it ruthlessly.
  • Real-world AI risks can follow the same logic: relentless pursuit of a goal without human-aligned constraints.