Making Sense with Sam Harris

#151 — Will We Destroy the Future?

Mar 18, 2019
In this engaging discussion, Nick Bostrom, a renowned philosopher from Oxford and head of the Future of Humanity Institute, tackles the pressing issue of existential risk. He shares insights on the vulnerable world hypothesis, questioning whether technological advancements could spell doom for humanity. Bostrom highlights the ethical quandaries we face with AI, biotechnology, and nuclear threats. He also explores the influence of moral frameworks on our decisions about the future, pondering if we’re navigating a simulated reality as we confront these challenges.
INSIGHT

Existential Risk

  • Existential risk threatens humanity's survival or potential.
  • Few people prioritize it, despite its importance.
INSIGHT

Transgenerational Public Good

  • Existential risk reduction is a transgenerational public good.
  • Future generations cannot directly reward us for mitigating it.
INSIGHT

Moral Illusions

  • People have "moral illusions": they care more about the suffering of identifiable individuals than about suffering at a much larger scale.
  • This bias makes it harder to care about existential risk.