Making Sense with Sam Harris

#385 — AI Utopia

Sep 30, 2024
Nick Bostrom, a philosopher and director of the Future of Humanity Institute, joins Sam Harris for a deep discussion of the complexities of artificial intelligence. They tackle the risks associated with superintelligent AI, including alignment failures and governance challenges. Bostrom explores the paradox of striving for a "solved world" and the need for ethical reflection as technology evolves. The conversation then turns to the future of work, asking how AI might redefine fulfillment and purpose in a life where traditional jobs become obsolete.
INSIGHT

Internet Connection of AI Systems

  • Connecting advanced AIs to the internet is useful for their development.
  • Current AI systems do not pose the extreme risks that would make air-gapping necessary.
INSIGHT

Delayed Recognition of Alignment Problem

  • Some highly intelligent people working in AI did not initially perceive the alignment problem.
  • Their delayed recognition illustrates how hard it is to change minds, especially as people grow older and more distinguished.
INSIGHT

Skepticism of AI Risk

  • Skeptics of the AI alignment problem often dismiss such concerns as a kind of religious faith.
  • They doubt that humans could be cognitively closed off from understanding a superintelligence.