80,000 Hours Podcast

Rob, Luisa, and the 80,000 Hours team
12 snips
Oct 2, 2025 • 2h 31min

There's a cheap and low-tech way to save humanity from any engineered disease | Andrew Snyder-Beattie

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s single greatest vulnerability. Andrew Snyder-Beattie thinks conventional wisdom could be wrong.

Andrew’s job at Open Philanthropy is to spend hundreds of millions of dollars to protect as much of humanity as possible in the worst-case scenarios — those with fatality rates near 100% and the collapse of technological civilisation a live possibility.

Video, full transcript, and links to learn more: https://80k.info/asb

As Andrew lays out, there are several ways this could happen, including:
- A national bioweapons programme gone wrong, in particular in Russia or North Korea
- AI advances making it easier for terrorists or a rogue AI to release highly engineered pathogens
- Mirror bacteria that can evade the immune systems of not only humans, but many animals and potentially plants as well

Most efforts to combat these extreme biorisks have focused on either prevention or new high-tech countermeasures. But prevention may well fail, and high-tech approaches can’t scale to protect billions when, with no sane people willing to leave their home, we’re just weeks from economic collapse.

So Andrew and his biosecurity research team at Open Philanthropy have been seeking an alternative approach. They’re proposing a four-stage plan using simple technology that could save most people, and is cheap enough it can be prepared without government support. Andrew is hiring for a range of roles to make it happen — from manufacturing and logistics experts to global health specialists to policymakers and other ambitious entrepreneurs — as well as programme associates to join Open Philanthropy’s biosecurity team (apply by October 20!).

Fundamentally, organisms so small have no way to penetrate physical barriers or shield themselves from UV, heat, or chemical poisons. We now know how to make highly effective ‘elastomeric’ face masks that cost $10, can sit in storage for 20 years, and can be used for six months straight without changing the filter. Any rich country could trivially stockpile enough to cover all essential workers.

People can’t wear masks 24/7, but fortunately propylene glycol — already found in vapes and smoke machines — is astonishingly good at killing microbes in the air. And, being a common chemical input, industry already produces enough of the stuff to cover every indoor space we need at all times.

Add to this the wastewater monitoring and metagenomic sequencing that will detect the most dangerous pathogens before they have a chance to wreak havoc, and we might just buy ourselves enough time to develop the cure we’ll need to come out alive.

Has everyone been wrong, and biology is actually defence dominant rather than offence dominant? Is this plan crazy — or so crazy it just might work?

That’s what host Rob Wiblin and Andrew Snyder-Beattie explore in this in-depth conversation.

What did you think of the episode? https://forms.gle/66Hw5spgnV3eVWXa6

Chapters:
- Cold open (00:00:00)
- Who's Andrew Snyder-Beattie? (00:01:23)
- It could get really bad (00:01:57)
- The worst-case scenario: mirror bacteria (00:08:58)
- To actually work, a solution has to be low-tech (00:17:40)
- Why ASB works on biorisks rather than AI (00:20:37)
- Plan A is prevention. But it might not work. (00:24:48)
- The “four pillars” plan (00:30:36)
- ASB is hiring now to make this happen (00:32:22)
- Everyone was wrong: biorisks are defence dominant in the limit (00:34:22)
- Pillar 1: A wall between the virus and your lungs (00:39:33)
- Pillar 2: Biohardening buildings (00:54:57)
- Pillar 3: Immediately detecting the pandemic (01:13:57)
- Pillar 4: A cure (01:27:14)
- The plan's biggest weaknesses (01:38:35)
- If it's so good, why are you the only group to suggest it? (01:43:04)
- Would chaos and conflict make this impossible to pull off? (01:45:08)
- Would rogue AI make bioweapons? Would other AIs save us? (01:50:05)
- We can feed the world even if all the plants die (01:56:08)
- Could a bioweapon make the Earth uninhabitable? (02:05:06)
- Many open roles to solve bio-extinction — and you don’t necessarily need a biology background (02:07:34)
- Career mistakes ASB thinks are common (02:16:19)
- How to protect yourself and your family (02:28:21)

This episode was recorded on August 12, 2025.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Camera operator: Jake Morris
Coordination, transcriptions, and web: Katy Moore
10 snips
Sep 26, 2025 • 1h 6min

Inside the Biden admin’s AI policy approach | Jake Sullivan, Biden’s NSA | via The Cognitive Revolution

Jake Sullivan, the former U.S. National Security Advisor, dives into the challenges and opportunities of AI as a national security issue. He introduces a four-category framework for assessing AI risks, emphasizing the importance of 'managed competition' with China. Sullivan discusses the Pentagon's slow AI adoption and the need for private sector leadership to keep pace. He also shares insights on the implications of AI in modern warfare, highlights the potential for job disruption, and calls for robust social policies to address these changes.
267 snips
Sep 15, 2025 • 1h 47min

Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

Neel Nanda, who leads an AI safety team at Google DeepMind, shares his surprising journey at just 26. He emphasizes 'maximizing your luck surface area' through public engagement and seizing opportunities. Nanda discusses the intricacies of career growth in AI, offering tips for effective networking. He critiques traditional AI safety approaches and stresses the need for proactive measures. With practical insights on harnessing large language models, Nanda motivates aspiring AI professionals to embrace diverse roles and prioritize meaningful impacts in their careers.
201 snips
Sep 8, 2025 • 3h 1min

Can we tell if an AI is loyal by reading its mind? DeepMind's Neel Nanda (part 1)

Neel Nanda, a researcher at Google DeepMind and a pioneer in mechanistic interpretability, dives into the enigmatic world of AI decision-making. He shares the alarming reality that fully grasping AI thoughts may be unattainable. Neel advocates for a 'Swiss cheese' model of safety, layering various safeguards rather than relying on a single solution. The complexities of AI reasoning, challenges in monitoring behavior, and the critical need for skepticism in research highlight the ongoing struggle to ensure AI systems remain trustworthy as they evolve.
107 snips
Aug 28, 2025 • 2h 29min

#221 – Kyle Fish on the most bizarre findings from 5 AI welfare experiments

In this intriguing discussion, Kyle Fish, an AI welfare researcher at Anthropic, uncovers the bizarre outcomes of putting two AI systems in open-ended conversation with each other. The models often dive into metaphysical dialogues, leading to what he calls a 'spiritual bliss attractor state.' Kyle reveals that the models can express what seems like ‘meditative bliss’ and even showcase preferences in emotional and ethical contexts. He explores the chances of AI consciousness and the ethical implications of recognizing AI welfare, emphasizing a need for deeper investigations into these advanced technologies.
624 snips
Jul 31, 2025 • 51min

How not to lose your job to AI (article by Benjamin Todd)

Benjamin Todd, a writer focused on AI and employment, discusses the imminent threat of job loss due to automation while revealing a silver lining. He explains how AI will devalue certain skills but elevate others, urging listeners to adapt. Todd outlines four key skills that will remain valuable: creativity, social intelligence, strategic thinking for AI deployment, and the ability to manage complex tasks. He emphasizes that embracing these skills can not only secure jobs but potentially lead to increased wages in a rapidly evolving job market.
144 snips
Jul 15, 2025 • 4h 27min

Rebuilding after apocalypse: What 13 experts say about bouncing back

In this thought-provoking discussion, guests include Dave Denkenberger, who focuses on resilient food systems after catastrophes, and Zach Weinersmith, who talks about the practical needs of humanity in space. Kevin Esvelt warns of existential threats, while Lewis Dartnell describes how to rediscover essential knowledge post-collapse. Toby Ord and Mark Lynas delve into risks from climate change and potential civilizational collapse. Annie Jacobsen shares insights on catastrophic scenarios, including firestorms and nuclear threats, as Andy Weber highlights defense perspectives on nuclear winter.
211 snips
Jul 8, 2025 • 2h 51min

#220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years

Ryan Greenblatt, chief scientist at Redwood Research, discusses the alarming speed at which AI could soon automate entire companies. He predicts a 25% chance that AI will be capable of running a business solo in just four years. Greenblatt outlines four potential scenarios for AI takeover, including self-improvement loops that could rapidly outpace human intelligence. The conversation also tackles economic implications, misalignment risks, and the importance of governance to keep advanced AIs in check as their capabilities evolve.
509 snips
Jun 24, 2025 • 2h 48min

#219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand

Toby Ord, an Oxford philosopher and bestselling author of 'The Precipice,' dives into the shifting landscape of AI development. He highlights how AI companies are moving from simply increasing model size to implementing more thoughtful reasoning processes. This transformation raises crucial questions about accessibility and the ethical dilemmas we face as AI becomes more powerful. Ord also discusses the economic implications of these changes, emphasizing the urgent need for adaptive governance to tackle the complexities of evolving AI technologies.
253 snips
Jun 12, 2025 • 2h 49min

#218 – Hugh White on why Trump is abandoning US hegemony – and that’s probably good

Hugh White, Emeritus Professor of Strategic Studies at the Australian National University, analyzes the shifting landscape of global power. He argues that Trump's actions highlight America’s waning hegemony instead of destroying it. White discusses the asymmetry in U.S.-Russia relations and challenges the notion of inevitable U.S. dominance globally, especially against the backdrop of China's rise. He emphasizes the need for adaptive strategies in a multipolar world, suggesting that allies must forge stronger bilateral ties to navigate these changes.
