How close are we to the end of humanity? Toby Ord, Senior Researcher at Oxford University’s AI Governance Initiative and author of The Precipice, puts the odds of an existential catastrophe this century at roughly one in six. In this wide-ranging conversation, we unpack the risks that could end humanity’s story and explore why protecting future generations may be our greatest moral duty.
We explore:
• Why existential risk matters and what we owe the 10,000-plus generations who came before us
• Why Toby believes we face a one-in-six chance of existential catastrophe this century
• The four key types of AI risk: alignment failures, gradual disempowerment, AI-fueled coups, and AI-enabled weapons of mass destruction
• Why racing dynamics between companies and nations amplify those risks, and how an AI treaty might help
• How short-term incentives in democracies blind us to century-scale dangers, along with policy ideas to counter them
• The lessons COVID should have taught us (but didn’t)
• The hidden ways the nuclear threat has intensified as treaties lapse and geopolitical tensions rise
• Concrete steps each of us can take today to steer humanity away from the brink
—
Transcript: https://www.generalist.com/p/existential-risk-and-the-future-of-humanity-toby-ord
—
This episode is brought to you by Brex: The banking solution for startups.
—
Timestamps
(00:00) Intro
(02:20) An explanation of existential risk and the field that studies it
(06:20) How Toby’s interest in global poverty sparked his founding of Giving What We Can
(11:18) Why Toby chose to study under Derek Parfit at Oxford
(14:40) Population ethics, and how Parfit’s philosophy looked ahead to future generations
(19:05) An introduction to existential risk
(22:40) Why we should care about the continued existence of humans
(28:53) How fatherhood deepened Toby’s gratitude to his parents and previous generations
(31:57) An explanation of how LLMs and agents work
(40:10) The four types of AI risks
(46:58) How humans justify bad choices: lessons from the Manhattan Project
(51:29) A breakdown of the “unilateralist’s curse” and a case for an AI treaty
(1:02:15) COVID’s impact on our understanding of pandemic risk
(1:08:51) The shortcomings of our democracies and ways to combat our short-term focus
(1:14:50) Final meditations
—
Follow Toby Ord
Website: https://www.tobyord.com/
LinkedIn: https://www.linkedin.com/in/tobyord
X: https://x.com/tobyordoxford
Giving What We Can: https://www.givingwhatwecan.org/
—
Resources and episode mentions
—Books—
• The Precipice: Existential Risk and the Future of Humanity: https://www.amazon.com/dp/0316484911
• Reasons and Persons: https://www.amazon.com/Reasons-Persons-Derek-Parfit/dp/019824908X
• Practical Ethics: https://www.amazon.com/Practical-Ethics-Peter-Singer/dp/052143971X
—People—
• Derek Parfit: https://en.wikipedia.org/wiki/Derek_Parfit
• Carl Sagan: https://en.wikipedia.org/wiki/Carl_Sagan
• Stuart Russell: https://en.wikipedia.org/wiki/Stuart_J._Russell
—Other resources—
• DeepMind: https://deepmind.google/
• OpenAI: https://openai.com/
• Manhattan Project: https://en.wikipedia.org/wiki/Manhattan_Project
• The Unilateralist’s Curse and the Case for a Principle of Conformity: https://nickbostrom.com/papers/unilateralist.pdf
• The Nuclear Non-Proliferation Treaty (NPT), 1968: https://history.state.gov/milestones/1961-1968/npt
• The Blitz: https://en.wikipedia.org/wiki/The_Blitz
• Operation Warp Speed: https://en.wikipedia.org/wiki/Operation_Warp_Speed
—
Production and marketing by penname.co. For inquiries about sponsoring the podcast, email jordan@penname.co.