
Toby Ord
Oxford philosopher and bestselling author of The Precipice. His work focuses on the biggest-picture questions facing humanity.
Top 10 podcasts with Toby Ord
Ranked by the Snipd community

377 snips
Jun 24, 2025 • 2h 48min
#219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand
Toby Ord, an Oxford philosopher and bestselling author of 'The Precipice,' dives into the shifting landscape of AI development. He highlights how AI companies are moving from simply training ever-larger models to spending more compute on reasoning at inference time. This transformation raises crucial questions about accessibility and the ethical dilemmas we face as AI becomes more powerful. Ord also discusses the economic implications of these changes, emphasizing the urgent need for adaptive governance to keep pace with evolving AI technologies.

48 snips
Nov 7, 2018 • 37min
Fermi Paradox
Ever wondered where all the aliens are? It’s actually very weird that, as big and old as the universe is, we seem to be the only intelligent life. In this episode, Josh examines the Fermi paradox, and what it says about humanity’s place in the universe. (Original score by Point Lobo.) Interviewees: Anders Sandberg, Oxford University philosopher and co-creator of the Aestivation hypothesis; Seth Shostak, director of SETI; Toby Ord, Oxford University philosopher.

29 snips
Jun 16, 2025 • 2h 55min
Inference Scaling, AI Agents, and Moratoria (with Toby Ord)
Toby Ord, a Senior Researcher at Oxford University focused on existential risks, dives into the intriguing concept of the ‘scaling paradox’ in AI. He discusses how scaling challenges affect AI performance, particularly the diminishing returns to scaling deep learning models. The conversation also touches on the ethical implications of AI governance and the case for moratoria on advanced technologies. Toby also examines the shifting landscape of AI's capabilities and the potential risks for humanity, emphasizing the need to balance innovation and safety.

25 snips
Oct 3, 2021 • 3h 14min
One: Toby Ord on existential risks
In 2020, Oxford academic and 80,000 Hours trustee Dr Toby Ord released his book The Precipice: Existential Risk and the Future of Humanity. It's about how our long-term future could be better than almost anyone believes, but also how humanity's recklessness is putting that future at grave risk — in Toby's reckoning, a 1 in 6 chance of being extinguished this century.

Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so we thought this would be a perfect introduction to the problem of existential risks.

Full transcript, related links, and summary of this interview

This episode first broadcast on the regular 80,000 Hours Podcast feed on March 7, 2020. Some related episodes include:

• #81 – Ben Garfinkel on scrutinising classic AI risk arguments
• #70 – Dr Cassidy Nelson on the twelve best ways to stop the next pandemic (and limit COVID-19)
• #43 – Daniel Ellsberg on the creation of nuclear doomsday machines, the institutional insanity that maintains them, & how they could be dismantled

Series produced by Keiran Harris.

16 snips
Dec 10, 2021 • 1h 9min
Humanity on the precipice (Toby Ord)
Humanity could thrive for millions of years, unless our future is cut short by an existential catastrophe. Oxford philosopher Toby Ord explains the possible existential risks we face, including climate change, pandemics, and artificial intelligence. Toby and Julia discuss what led him to take existential risk more seriously, which risks he considers underrated vs. overrated, and how to estimate the probability of existential risk.

15 snips
Sep 8, 2023 • 3h 7min
#163 – Toby Ord on the perils of maximising the good that you do
Toby Ord, a moral philosopher from the University of Oxford and a pioneer of effective altruism, discusses the complexities of maximizing good in altruistic efforts. He warns against the dangers of an all-or-nothing approach, using the FTX fallout as a cautionary tale. Toby emphasizes the importance of integrity and humility in leadership and argues for a more balanced goal: 'doing most of the good you can.' He also explores the intricate relationship between utilitarian ethics and individual character, highlighting the nuanced nature of moral decision-making.

9 snips
Mar 7, 2020 • 3h 14min
#72 - Toby Ord on the precipice and humanity's potential futures
Toby Ord, a moral philosopher at Oxford and author of 'The Precipice,' discusses humanity's precarious future. He estimates a staggering 1 in 6 chance of extinction this century due to both natural and human-made risks. Toby highlights why supervolcanoes pose a greater threat than asteroids, the alarming underfunding of global safety agreements, and the existential risks posed by AI. He emphasizes the importance of proactive measures, long-term planning, and moral dialogue to ensure a thriving future for humanity.

7 snips
Oct 25, 2023 • 27min
Highlights: #163 – Toby Ord on the perils of maximising the good that you do
Toby Ord discusses the trade-offs of maximizing a single metric and the risks of optimizing AI. The conversation explores the concept of moral trade and the idea of virtue as a multiplier on the value of projects. Plus: a collection of restored and previously unseen Earth photographs from the Apollo program.

7 snips
Nov 14, 2018 • 37min
Natural Risks
Guests on the podcast discuss the existential risks faced by humanity, including asteroid impacts, a runaway greenhouse effect caused by climate change, and intelligent algorithms evolving beyond human control, as well as the potential for human colonization of other planets.

Jun 24, 2025 • 1h 19min
Existential Risk and the Future of Humanity: Lessons from AI, Pandemics, and Nuclear Threats | Toby Ord (Author of "The Precipice")
Toby Ord, a Senior Researcher at Oxford's AI Governance Initiative and author of The Precipice, dives deep into existential risks facing humanity. He argues that we face a one-in-six chance of civilization-ending catastrophe this century. The discussion covers AI-related threats, from alignment failures to geopolitical tensions. Ord emphasizes our moral duty to future generations and reflects on the lessons COVID-19 should have taught us. He also outlines actionable steps individuals can take to help steer humanity away from potential extinction.