
80,000 Hours Podcast
Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.
Subscribe by searching for '80000 Hours' wherever you get podcasts.
Hosted by Rob Wiblin and Luisa Rodriguez.
Latest episodes

Feb 25, 2025 • 3h 42min
#139 Classic episode – Alan Hájek on puzzles and paradoxes in probability and expected value
Alan Hájek, a Professor of Philosophy at the Australian National University, shares his expertise on the perplexities of probability and decision-making. He dives deep into the St. Petersburg paradox, questioning the logic behind infinite expected value despite finite outcomes. The conversation also touches on philosophical methods, the significance of counterfactuals in understanding decisions, and the challenges of assigning probabilities to unprecedented events. Join this intriguing exploration of common sense versus philosophical reasoning.
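(For context on that paradox: the St. Petersburg game pays 2^n dollars if the first heads appears on the n-th flip of a fair coin, so its expected value is (1/2)·2 + (1/4)·4 + (1/8)·8 + … = 1 + 1 + 1 + …, which diverges to infinity even though every individual payout is finite.)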

Feb 19, 2025 • 2h 41min
#143 Classic episode – Jeffrey Lewis on the most common misconceptions about nuclear weapons
Jeffrey Lewis, an expert on nuclear weapons and founder of Arms Control Wonk, debunks common misconceptions about U.S. nuclear policy. He reveals that the principle of 'mutually assured destruction' was misinterpreted and critiques how military plans suggest the U.S. aims to dominate in nuclear conflicts. Lewis also discusses the complexities of decision-making in nuclear strategy and the persistent misunderstandings stemming from rigid communication within the nuclear community, emphasizing the need for international cooperation to mitigate existential threats.

Feb 14, 2025 • 2h 44min
#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway
Allan Dafoe, Director of Frontier Safety and Governance at Google DeepMind, dives into the unstoppable growth of technology, illustrating how societies that adopt new capabilities often outpace those that resist. He discusses the historical context of Japan's Meiji Restoration as a case study of technological competition. Dafoe highlights the importance of steering AI development responsibly, addressing safety challenges, and fostering cooperation among AI systems. He emphasizes the balance between AI innovation and necessary governance to prevent potential risks, urging collective accountability.

Feb 12, 2025 • 57min
Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui)
In this discussion, Rose Chan Loui, the founding executive director of UCLA Law’s Lowell Milken Center for Philanthropy and Nonprofits, dives into Elon Musk's audacious $97.4 billion bid for OpenAI. She explains the legal and ethical challenges facing OpenAI's nonprofit board amidst this pressure. The conversation highlights the complexities of balancing charitable missions with investor interests, the implications of nonprofit-to-profit transitions, and the broader societal responsibilities tied to artificial intelligence development.

Feb 10, 2025 • 3h 12min
AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out
Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023. Check out the full transcript on the 80,000 Hours website. You can decide whether the views we expressed (and those from guests) have held up over these last two busy years.

You'll hear:
Ajeya Cotra on overrated AGI worries
Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
Ian Morris on why the future must be radically different from the present
Nick Joseph on whether his company's internal safety policies are enough
Richard Ngo on what everyone gets wrong about how ML models work
Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn't
Carl Shulman on why you'll prefer robot nannies over human ones
Zvi Mowshowitz on why he's against working at AI companies except in some safety roles
Hugo Mercier on why even superhuman AGI won't be that persuasive
Rob Long on the case for and against digital sentience
Anil Seth on why he thinks consciousness is probably biological
Lewis Bollard on whether AI advances will help or hurt nonhuman animals
Rohin Shah on whether humanity's work ends at the point it creates AGI

And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Chapters:
Cold open (00:00:00)
Rob's intro (00:00:58)
Rob & Luisa: Bowerbirds compiling the AI story (00:03:28)
Ajeya Cotra on the misalignment stories she doesn't buy (00:09:16)
Rob & Luisa: Agentic AI and designing machine people (00:24:06)
Holden Karnofsky on the dangers of even aligned AI, and how we probably won't all die from misaligned AI (00:39:20)
Ian Morris on why we won't end up living like The Jetsons (00:47:03)
Rob & Luisa: It's not hard for nonexperts to understand we're playing with fire here (00:52:21)
Nick Joseph on whether AI companies' internal safety policies will be enough (00:55:43)
Richard Ngo on the most important misconception in how ML models work (01:03:10)
Rob & Luisa: Issues Rob is less worried about now (01:07:22)
Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08)
Michael Webb on why he's sceptical about explosive economic growth (01:20:50)
Carl Shulman on why people will prefer robot nannies over humans (01:28:25)
Rob & Luisa: Should we expect AI-related job loss? (01:36:19)
Zvi Mowshowitz on why he thinks it's a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06)
Holden Karnofsky on the power that comes from just making models bigger (01:45:21)
Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49)
Hugo Mercier on how AI won't cause misinformation pandemonium (01:58:29)
Rob & Luisa: How hard will it actually be to create intelligence? (02:09:08)
Robert Long on whether digital sentience is possible (02:15:09)
Anil Seth on why he believes in the biological basis of consciousness (02:27:21)
Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52)
Rob & Luisa: The most interesting new argument Rob's heard this year (02:50:37)
Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35)
Rob's outro (03:11:02)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore

Feb 7, 2025 • 3h 10min
#124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions
Karen Levy, a seasoned expert in global health and development, discusses the pitfalls of overly fashionable concepts like 'sustainability' and 'holistic approaches' in development projects. She critiques the misguided focus on these terms, arguing that they can lead to ineffective solutions. Levy highlights the successful scaling of deworming initiatives, sharing insights on the challenges faced in implementation and funding. Through her experience with community engagement in Kenya, she emphasizes the need for realistic, evidence-based approaches to bring about meaningful change.

Feb 4, 2025 • 1h 15min
If digital minds could suffer, how would we ever know? (Article)
The podcast dives into the intriguing debate over the moral status of AI and whether digital minds can truly experience sentience. It contrasts perspectives from experts addressing the ethical implications of creating conscious AI. The discussion raises essential questions about responsibility towards potential AI welfare and the risks of misunderstanding their capacities. The need for research into assessing AI's moral status emerges as a critical theme, highlighting both the potential risks and benefits of advancing AI technology.

Jan 31, 2025 • 2h 41min
#132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems
Nova DasSarma, a computer scientist at Anthropic and co-founder of Hofvarpnir Studios, dives into the critical realm of information security in AI. She discusses the immense financial stakes in AI development and the vulnerabilities inherent in training models. The conversation touches on recent high-profile breaches, like Nvidia's, and the significant security challenges posed by advanced technologies. DasSarma emphasizes the importance of collaboration in improving security protocols and ensuring safe AI alignment amid evolving threats.

Jan 22, 2025 • 2h 26min
#138 Classic episode – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter
Sharon Hewitt Rawlette, a philosopher and author, explores the intrinsic values of pleasure and pain. She argues that positive feelings are fundamentally valuable, while suffering holds negative intrinsic worth. The conversation dives into the historical evolution of hedonism, the complexities of moral truths, and the balance between intrinsic and instrumental values. Rawlette also examines how personal experiences shape morality and decision-making, questioning the role of genuine connections beyond mere emotional benefits.

Jan 15, 2025 • 3h 41min
#134 Classic episode – Ian Morris on what big-picture history teaches us
Ian Morris, a bestselling historian and Willard Professor of Classics at Stanford University, dives deep into the evolution of human values over millennia. He discusses how moral landscapes have transformed from slavery being deemed natural to modern views on gender and equality. The conversation covers the interplay of warfare, energy sources, and social dynamics in shaping cultural norms. Morris argues for a provocative view that society evolves towards organizational systems that enhance survival and efficiency, revealing the fascinating parallels between past and present.