80,000 Hours Podcast

#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications

Oct 5, 2021
Carl Shulman, a research associate at Oxford's Future of Humanity Institute and an expert in existential risk, dives deep into the practical importance of mitigating threats to humanity. He argues that addressing risks like pandemics and AI is not just a philosophical concern but a matter of common sense, given the staggering costs of potential disasters. Shulman critiques public preparedness, emphasizes proactive strategies for food security, and discusses the urgency of innovation in biosecurity, painting a vivid picture of our precarious future.
INSIGHT

Shulman's Long-Term View

  • Carl Shulman supports long-term future preservation but rejects strong long-termism.
  • He believes other moral values, like duties of justice and family, hold weight.
INSIGHT

Cost-Benefit Analysis of Existential Risk

  • Existential risk reduction is justifiable with standard cost-benefit analysis.
  • A 1% extinction risk reduction over a century could justify a $2.2 trillion expense.
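A minimal sketch of the arithmetic behind that figure: if the total value at stake from extinction is taken to be some aggregate V, a policy that cuts extinction risk by 1% is worth 0.01 × V in expectation. The $220 trillion value-at-stake used below is a hypothetical chosen to match the quoted $2.2 trillion, not a figure stated in the episode.

```python
# Back-of-the-envelope expected-value check for the cost-benefit claim.
value_at_stake = 220e12   # hypothetical aggregate value lost to extinction ($)
risk_reduction = 0.01     # 1% absolute reduction in extinction risk

# Expected benefit = probability reduction times value preserved.
expected_benefit = risk_reduction * value_at_stake
print(f"Justifiable expense: ${expected_benefit / 1e12:.1f} trillion")
```

On this logic, any spending below the expected benefit passes a standard cost-benefit test, which is why no appeal to strong long-termism is needed.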
ANECDOTE

COVID-19 as an Example

  • COVID-19 illustrates why we underprepare for global risks.
  • The pandemic's $10 trillion cost and high mortality highlight political and coordination failures.