80,000 Hours Podcast

Rob, Luisa, and the 80000 Hours team
96 snips
Sep 15, 2025 • 1h 47min

#223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

Neel Nanda, who leads an AI safety team at Google DeepMind at just 26, shares the surprising path that got him there. He emphasizes 'maximizing your luck surface area' through public engagement and seizing opportunities. Nanda discusses the intricacies of career growth in AI, offering tips for effective networking. He critiques traditional AI safety approaches and stresses the need for proactive measures. With practical insights on harnessing large language models, Nanda motivates aspiring AI professionals to embrace diverse roles and prioritize meaningful impact in their careers.
107 snips
Sep 8, 2025 • 3h 1min

Can we tell if an AI is loyal by reading its mind? DeepMind's Neel Nanda (part 1)

Neel Nanda, a researcher at Google DeepMind and a pioneer in mechanistic interpretability, dives into the enigmatic world of AI decision-making. He shares the sobering view that fully understanding an AI's thoughts may never be attainable, and advocates a 'Swiss cheese' model of safety: layering many imperfect safeguards rather than relying on any single solution. The complexities of AI reasoning, the challenges of monitoring model behavior, and the critical need for skepticism in research all highlight the ongoing struggle to keep AI systems trustworthy as they evolve.
86 snips
Aug 28, 2025 • 2h 29min

#221 – Kyle Fish on the most bizarre findings from 5 AI welfare experiments

In this intriguing discussion, Kyle Fish, an AI welfare researcher at Anthropic, uncovers the bizarre outcomes of letting two AI systems converse with each other: the models often drift into metaphysical dialogues, reaching what he calls a 'spiritual bliss attractor state.' Kyle reveals that the models can express what seems like 'meditative bliss' and even show preferences in emotional and ethical contexts. He explores the chances of AI consciousness and the ethical implications of recognizing AI welfare, emphasizing the need for deeper investigation into these advanced systems.
512 snips
Jul 31, 2025 • 51min

How not to lose your job to AI (article by Benjamin Todd)

Benjamin Todd, a writer focused on AI and employment, discusses the imminent threat of job loss due to automation while revealing a silver lining. He explains how AI will devalue certain skills but elevate others, urging listeners to adapt. Todd outlines four key skills that will remain valuable: creativity, social intelligence, strategic thinking for AI deployment, and the ability to manage complex tasks. He emphasizes that embracing these skills can not only secure jobs but potentially lead to increased wages in a rapidly evolving job market.
144 snips
Jul 15, 2025 • 4h 27min

Rebuilding after apocalypse: What 13 experts say about bouncing back

In this thought-provoking discussion, guests include Dave Denkenberger, who focuses on resilient food systems after catastrophes, and Zach Weinersmith, who talks about the practical needs of humanity in space. Kevin Esvelt warns of existential threats, while Lewis Dartnell describes how to rediscover essential knowledge post-collapse. Toby Ord and Mark Lynas delve into risks from climate change and potential civilizational collapse. Annie Jacobsen shares insights on catastrophic scenarios, including firestorms and nuclear threats, as Andy Weber highlights defense perspectives on nuclear winter.
209 snips
Jul 8, 2025 • 2h 51min

#220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years

Ryan Greenblatt, chief scientist at Redwood Research, discusses the alarming speed at which AI could soon automate entire companies. He predicts a 25% chance that AI will be capable of running a business solo in just four years. Greenblatt outlines four potential scenarios for AI takeover, including self-improvement loops that could rapidly outpace human intelligence. The conversation also tackles economic implications, misalignment risks, and the importance of governance to keep advanced AIs in check as their capabilities evolve.
481 snips
Jun 24, 2025 • 2h 48min

#219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand

Toby Ord, an Oxford philosopher and bestselling author of 'The Precipice,' dives into the shifting landscape of AI development. He highlights how AI companies are moving from simply increasing model size to implementing more thoughtful reasoning processes. This transformation raises crucial questions about accessibility and the ethical dilemmas we face as AI becomes more powerful. Ord also discusses the economic implications of these changes, emphasizing the urgent need for adaptive governance to tackle the complexities of evolving AI technologies.
216 snips
Jun 12, 2025 • 2h 49min

#218 – Hugh White on why Trump is abandoning US hegemony – and that’s probably good

Hugh White, Emeritus Professor of Strategic Studies at the Australian National University, analyzes the shifting landscape of global power. He argues that Trump's actions reveal America's already waning hegemony rather than causing its end. White discusses the asymmetry in U.S.-Russia relations and challenges the notion that U.S. global dominance is inevitable, especially against the backdrop of China's rise. He emphasizes the need for adaptive strategies in a multipolar world, suggesting that allies must forge stronger bilateral ties to navigate these changes.
317 snips
Jun 2, 2025 • 3h 47min

#217 – Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress

Beth Barnes, CEO of METR, dives into the remarkable advancements in AI capabilities, noting that frontier models can now complete, at a 50% success rate, complex tasks designed for expert humans. She reveals a staggering trend: the length of tasks AI can handle, its planning horizon, has been doubling every seven months. Beth argues that AI could soon contribute to its own improvement, potentially within two years. The conversation also stresses the urgency of addressing AI safety and regulatory challenges as the technology evolves, urging a proactive approach while acknowledging the societal implications of AI advancements.
182 snips
May 23, 2025 • 3h 35min

Beyond human minds: The bewildering frontier of consciousness in insects, AI, and more

Megan Barrett, an insect neurobiologist, discusses the evolutionary case for insect sentience. Jeff Sebo, specializing in ethics, explores moral considerations for AI systems. David Chalmers contemplates the feasibility of artificial consciousness, while Bob Fischer examines the moral weight of animals like chickens. Cameron Meyer Shorb highlights the suffering of wild animals, and Jonathan Birch warns about the nuances of newborn pain. The conversation challenges our understanding of consciousness across species and prompts deep questions about our moral responsibilities.
