
80,000 Hours Podcast

Latest episodes

Jun 2, 2025 • 3h 47min

#217 – Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress

Beth Barnes, CEO of METR, dives into the remarkable advancements in AI capabilities, noting that models now have a 50% success rate in tackling complex tasks originally designed for expert humans. She highlights the striking trend of AI's planning horizon doubling every seven months, and suggests that AI could begin contributing to its own improvement within two years. The conversation also stresses the urgency of addressing AI safety and regulatory challenges as the technology evolves, urging a proactive approach while acknowledging the societal implications of AI advancements.
May 23, 2025 • 3h 35min

Beyond human minds: The bewildering frontier of consciousness in insects, AI, and more

Megan Barrett, an insect neurobiologist, discusses the evolutionary case for insect sentience. Jeff Sebo, specializing in ethics, explores moral considerations for AI systems. David Chalmers contemplates the feasibility of artificial consciousness, while Bob Fischer examines the moral weight of animals like chickens. Cameron Meyer Shorb highlights the suffering of wild animals, and Jonathan Birch warns about the nuances of newborn pain. The conversation challenges our understanding of consciousness across species and prompts deep questions about our moral responsibilities.
May 15, 2025 • 1h 12min

Don’t believe OpenAI’s “nonprofit” spin (emergency pod with Tyler Whitmer)

Tyler Whitmer, a litigator and coauthor of a letter to state Attorneys General regarding OpenAI's corporate restructuring, discusses the alarming shift of OpenAI from nonprofit to Public Benefit Corporation (PBC). He argues this transition could undermine the nonprofit's ability to prioritize safety over profit in AI development. Key points include the lack of accountability for PBCs, regulatory challenges, and the legal complexities of maintaining public benefit in a profit-driven landscape. Tyler emphasizes the urgent need for transparency and oversight to protect humanity's interests.
May 12, 2025 • 1h

The case for and against AGI by 2030 (article by Benjamin Todd)

In this illuminating discussion, Benjamin Todd, a writer focused on AGI since 2014, breaks down the trends shaping the future of AI. He explores four key drivers of AI progress, including enhanced reasoning capabilities and the growing computational power fueling development. Todd contrasts the optimistic scenarios where AGI could emerge by 2030 and revolutionize industries like software and research with the challenges that might hinder such advancements. It's a thoughtful examination of the promising yet complex road ahead for artificial intelligence.
May 8, 2025 • 1h 3min

Emergency pod: Did OpenAI give up, or is this just a new trap? (with Rose Chan Loui)

Rose Chan Loui, a nonprofit law expert at UCLA, joins the discussion about OpenAI's recent governance shift from a nonprofit to a public benefit corporation. She highlights the significance of the attorneys general's intervention and what it means for the nonprofit's control over safety decisions. Chan Loui stresses that while the announced changes sound promising, their effectiveness hinges on practical enforcement and clarity. The episode also explores broader governance challenges, ethical AI development, and the potential pitfalls of corporate influence on nonprofit missions.
May 2, 2025 • 3h 15min

#216 – Ian Dunt on why governments in Britain and elsewhere can't get anything done – and how to fix it

Ian Dunt, a British author and political journalist, dives into the systemic failures of the UK government. He highlights how clueless ministers and constant civil service turnover create chaos and inefficiency. Dunt critiques the outdated systems and physical spaces like 10 Downing Street that hinder decision-making. He discusses the disconnect in MP selection, policies crafted in haste, and the need for reforms focusing on expertise over political loyalty. His insights suggest that successful governance requires both structural changes and accountability.
Apr 24, 2025 • 2h 19min

Serendipity, weird bets, & cold emails that actually work: Career advice from 16 former guests

How do you navigate a career path when the future of work is uncertain? How important is mentorship versus immediate impact? Is it better to focus on your strengths or on the world’s most pressing problems? Should you specialise deeply or develop a unique combination of skills? From embracing failure to finding unlikely allies, we bring you 16 diverse perspectives from past guests who’ve found unconventional paths to impact and helped others do the same.

Links to learn more and full transcript.

Chapters:
Cold open (00:00:00)
Luisa's intro (00:01:04)
Holden Karnofsky on just kicking ass at whatever (00:02:53)
Jeff Sebo on what improv comedy can teach us about doing good in the world (00:12:23)
Dean Spears on being open to randomness and serendipity (00:19:26)
Michael Webb on how to think about career planning given the rapid developments in AI (00:21:17)
Michelle Hutchinson on finding what motivates you and reaching out to people for help (00:41:10)
Benjamin Todd on figuring out if a career path is a good fit for you (00:46:03)
Chris Olah on the value of unusual combinations of skills (00:50:23)
Holden Karnofsky on deciding which weird ideas are worth betting on (00:58:03)
Karen Levy on travelling to learn about yourself (01:03:10)
Leah Garcés on finding common ground with unlikely allies (01:06:53)
Spencer Greenberg on recognising toxic people who could derail your career and life (01:13:34)
Holden Karnofsky on the many jobs that can help with AI (01:23:13)
Danny Hernandez on using world events to trigger you to work on something else (01:30:46)
Sarah Eustis-Guthrie on exploring and pivoting in careers (01:33:07)
Benjamin Todd on making tough career decisions (01:38:36)
Hannah Ritchie on being selective when following others’ advice (01:44:22)
Alex Lawsen on getting good mentorship (01:47:25)
Chris Olah on cold emailing that actually works (01:54:49)
Pardis Sabeti on prioritising physical health to do your best work (01:58:34)
Chris Olah on developing good taste and technique as a researcher (02:04:39)
Benjamin Todd on why it’s so important to apply to loads of jobs (02:09:52)
Varsha Venugopal on embracing uncomfortable situations and celebrating failures (02:14:25)
Luisa's outro (02:17:43)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
Apr 16, 2025 • 3h 23min

#215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power

Tom Davidson, a researcher at the Forethought Centre for AI Strategy in Oxford, dives into the chilling potential of AI to facilitate power grabs by small, organized groups. He discusses how AI advancements could empower military coups and autocratic rule by minimizing the need for public participation. Davidson warns of 'secret loyalties' in AI systems that might enable tyranny. The conversation highlights urgent ethical implications for democracy and underscores the necessity of transparency in developing AI technologies.
Apr 11, 2025 • 1h 47min

Guilt, imposter syndrome & doing good: 16 past guests share their mental health journeys

This conversation features Tim LeBon, a therapist specialising in perfectionism; Hannah Ritchie, a data researcher; Christian Ruhl, a grantmaker who speaks openly about his stutter; Will MacAskill, a moral philosopher; and Ajeya Cotra, a grantmaker navigating research challenges. They discuss how moral perfectionism can harm self-identity, the toll of imposter syndrome in high-stakes environments, and the necessity of self-acceptance. They share personal struggles with guilt and anxiety, and the balance between making an impact and maintaining mental well-being. Their insights offer a roadmap for navigating emotional barriers in the pursuit of doing good.
Apr 4, 2025 • 2h 16min

#214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway

Buck Shlegeris, CEO of Redwood Research, dives into the crucial topic of AI control mechanisms to mitigate risks of misalignment. He shares insights on developing safety protocols for advanced AIs that could potentially act against human interests. Shlegeris emphasizes actionable strategies that aren't as complex as they seem. The discussion highlights the urgent need for robust safeguards in AI deployment and the ethical implications of misaligned systems. He also explores the challenges of monitoring AI, underscoring a proactive approach to ensure safety and trust.
