
80,000 Hours Podcast

Latest episodes

9 snips
Feb 19, 2025 • 2h 41min

#143 Classic episode – Jeffrey Lewis on the most common misconceptions about nuclear weapons

Jeffrey Lewis, an expert on nuclear weapons and founder of Arms Control Wonk, debunks common misconceptions about U.S. nuclear policy. He reveals that the principle of 'mutually assured destruction' was misinterpreted and critiques how military plans suggest the U.S. aims to dominate in nuclear conflicts. Lewis also discusses the complexities of decision-making in nuclear strategy and the persistent misunderstandings stemming from rigid communication within the nuclear community, emphasizing the need for international cooperation to mitigate existential threats.
130 snips
Feb 14, 2025 • 2h 44min

#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Allan Dafoe, Director of Frontier Safety and Governance at Google DeepMind, dives into the unstoppable growth of technology, illustrating how societies that adopt new capabilities often outpace those that resist. He discusses the historical context of Japan's Meiji Restoration as a case study of technological competition. Dafoe highlights the importance of steering AI development responsibly, addressing safety challenges, and fostering cooperation among AI systems. He emphasizes the balance between AI innovation and necessary governance to prevent potential risks, urging collective accountability.
27 snips
Feb 12, 2025 • 57min

Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui)

In this discussion, Rose Chan Loui, the founding executive director of UCLA Law’s Lowell Milken Center for Philanthropy and Nonprofits, dives into Elon Musk's audacious $97.4 billion bid for OpenAI. She explains the legal and ethical challenges facing OpenAI's nonprofit board amidst this pressure. The conversation highlights the complexities of balancing charitable missions with investor interests, the implications of nonprofit-to-profit transitions, and the broader societal responsibilities tied to artificial intelligence development.
84 snips
Feb 10, 2025 • 3h 12min

AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.

Check out the full transcript on the 80,000 Hours website. You can decide whether the views we expressed (and those from guests) then have held up over these last two busy years.

You'll hear:
Ajeya Cotra on overrated AGI worries
Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
Ian Morris on why the future must be radically different from the present
Nick Joseph on whether his company's internal safety policies are enough
Richard Ngo on what everyone gets wrong about how ML models work
Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn't
Carl Shulman on why you'll prefer robot nannies over human ones
Zvi Mowshowitz on why he's against working at AI companies except in some safety roles
Hugo Mercier on why even superhuman AGI won't be that persuasive
Rob Long on the case for and against digital sentience
Anil Seth on why he thinks consciousness is probably biological
Lewis Bollard on whether AI advances will help or hurt nonhuman animals
Rohin Shah on whether humanity's work ends at the point it creates AGI

And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Chapters:
Cold open (00:00:00)
Rob's intro (00:00:58)
Rob & Luisa: Bowerbirds compiling the AI story (00:03:28)
Ajeya Cotra on the misalignment stories she doesn't buy (00:09:16)
Rob & Luisa: Agentic AI and designing machine people (00:24:06)
Holden Karnofsky on the dangers of even aligned AI, and how we probably won't all die from misaligned AI (00:39:20)
Ian Morris on why we won't end up living like The Jetsons (00:47:03)
Rob & Luisa: It's not hard for nonexperts to understand we're playing with fire here (00:52:21)
Nick Joseph on whether AI companies' internal safety policies will be enough (00:55:43)
Richard Ngo on the most important misconception in how ML models work (01:03:10)
Rob & Luisa: Issues Rob is less worried about now (01:07:22)
Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08)
Michael Webb on why he's sceptical about explosive economic growth (01:20:50)
Carl Shulman on why people will prefer robot nannies over humans (01:28:25)
Rob & Luisa: Should we expect AI-related job loss? (01:36:19)
Zvi Mowshowitz on why he thinks it's a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06)
Holden Karnofsky on the power that comes from just making models bigger (01:45:21)
Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49)
Hugo Mercier on how AI won't cause misinformation pandemonium (01:58:29)
Rob & Luisa: How hard will it actually be to create intelligence? (02:09:08)
Robert Long on whether digital sentience is possible (02:15:09)
Anil Seth on why he believes in the biological basis of consciousness (02:27:21)
Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52)
Rob & Luisa: The most interesting new argument Rob's heard this year (02:50:37)
Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35)
Rob's outro (03:11:02)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore
16 snips
Feb 7, 2025 • 3h 10min

#124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

Karen Levy, a seasoned expert in global health and development, discusses how fashionable concepts like 'sustainability' and 'holistic approaches' can steer development projects toward ineffective solutions. Levy highlights the successful scaling of deworming initiatives, sharing insights on the challenges of implementation and funding. Drawing on her experience with community engagement in Kenya, she emphasizes the need for realistic, evidence-based approaches to bring about meaningful change.
10 snips
Feb 4, 2025 • 1h 15min

If digital minds could suffer, how would we ever know? (Article)

The podcast dives into the intriguing debate over the moral status of AI and whether digital minds can truly experience sentience. It contrasts perspectives from experts addressing the ethical implications of creating conscious AI. The discussion raises essential questions about responsibility towards potential AI welfare and the risks of misunderstanding their capacities. The need for research into assessing AI's moral status emerges as a critical theme, highlighting both the potential risks and benefits of advancing AI technology.
Jan 31, 2025 • 2h 41min

#132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems

Nova DasSarma, a computer scientist at Anthropic and co-founder of Hofvarpnir Studios, dives into the critical realm of information security in AI. She discusses the immense financial stakes in AI development and the vulnerabilities inherent in training models. The conversation touches on recent high-profile breaches, like Nvidia's, and the significant security challenges posed by advanced technologies. DasSarma emphasizes the importance of collaboration in improving security protocols and ensuring safe AI alignment amid evolving threats.
52 snips
Jan 22, 2025 • 2h 26min

#138 Classic episode – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter

Sharon Hewitt Rawlette, a philosopher and author, explores the intrinsic values of pleasure and pain. She argues that positive feelings are fundamentally valuable, while suffering holds negative intrinsic worth. The conversation dives into the historical evolution of hedonism, the complexities of moral truths, and the balance between intrinsic and instrumental values. Rawlette also examines how personal experiences shape morality and decision-making, questioning the role of genuine connections beyond mere emotional benefits.
92 snips
Jan 15, 2025 • 3h 41min

#134 Classic episode – Ian Morris on what big-picture history teaches us

Ian Morris, a bestselling historian and Willard Professor of Classics at Stanford University, dives deep into the evolution of human values over millennia. He discusses how moral landscapes have transformed from slavery being deemed natural to modern views on gender and equality. The conversation covers the interplay of warfare, energy sources, and social dynamics in shaping cultural norms. Morris argues for a provocative view that society evolves towards organizational systems that enhance survival and efficiency, revealing the fascinating parallels between past and present.
28 snips
Jan 8, 2025 • 2h 48min

#140 Classic episode – Bear Braumoeller on the case that war isn’t in decline

Bear Braumoeller, a noted political science professor, delves into the contentious debate surrounding the decline of war. He argues against the popular notion that warfare is decreasing, citing compelling data and historical analysis. The conversation spans the complexities of modern warfare, the paradox of Enlightenment ideals fueling conflict, and the role of religion in warfare. Braumoeller emphasizes the need for careful interpretation of conflict data and highlights geopolitical tensions, particularly between the US and China, as pressing indicators of potential future conflicts.
