

80k After Hours
The 80,000 Hours team
Resources on how to do good with your career — and anything else we here at 80,000 Hours feel like releasing.
Episodes

Jul 22, 2025 • 47min
Highlights: #218 – Hugh White on why Trump is abandoning US hegemony – and that’s probably good
For decades, US allies have slept soundly under the protection of America’s overwhelming military might. Donald Trump — with his threats to ditch NATO, seize Greenland, and abandon Taiwan — seems hell-bent on shattering that comfort.

But according to Hugh White — one of the world's leading strategic thinkers, emeritus professor at the Australian National University, and author of Hard New World: Our Post-American Future — Trump isn't destroying American hegemony. He's simply revealing that it's already gone.

These highlights are from episode #218 of The 80,000 Hours Podcast: Hugh White on why Trump is abandoning US hegemony – and that’s probably good, and include:

- America has been all talk, no action when it comes to China and Russia (00:39)
- How Trump has significantly brought forward the inevitable (05:14)
- Westerners always underestimate what China can achieve (10:32)
- We live in a multipolar world; we've got to make a multipolar world work (15:47)
- Trump is half-right that the US was being ripped off (19:06)
- Europe is strong enough to take on Russia, except it lacks nuclear deterrence (22:27)
- A multipolar world is bad, but better than the alternative: nuclear war (28:50)
- Taiwan's position is essentially indefensible — and the rest of the world needs to be honest with them about that (33:24)
- AGI may or may not overcome existing nuclear deterrence (39:16)

These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!

And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong

Jun 26, 2025 • 41min
Highlights: #217 – Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress
Beth Barnes, CEO of METR, leads the charge in assessing AI models' capabilities and risks. In this intriguing discussion, she reveals that AI can now tackle expert-level tasks in under 30 minutes, a drastic shift from earlier benchmarks. Barnes emphasizes the necessity of rigorous external audits for AI safety, arguing that internal checks alone may not suffice. Excitingly, she forecasts the arrival of recursively self-improving AI in just two years, prompting urgent conversations about accountability and testing before deployment.

May 27, 2025 • 31min
Highlights: #216 – Ian Dunt on why governments in Britain and elsewhere can’t get anything done – and how to fix it
Ian Dunt, a British author and political journalist, delves into the dysfunction of government structures. He highlights how a lack of understanding among ministers and frequent turnover among civil servants hinders effectiveness. Dunt discusses the dangers of expanded delegated legislation and advocates for independent-minded MPs. He proposes reforms for better governance, including proportional representation to enhance democratic integrity. The conversation provides insightful critiques of systemic challenges within British politics.

May 16, 2025 • 37min
Highlights: #215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power
Tom Davidson, a Senior Research Fellow at the Forethought Centre for AI Strategy, delves into the unsettling implications of AI on power dynamics. He discusses how advanced AI could facilitate unprecedented coups by small elites, diminishing democratic oversight. Topics include the potential for military automation, the historical patterns of technology reshaping governance, and the critical need for transparency in AI deployments. Davidson stresses the risks of concentrated control and the importance of ethical considerations in navigating this new landscape.

Apr 18, 2025 • 41min
Highlights: #214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway
In this enlightening discussion, Buck Shlegeris, CEO of Redwood Research and a pioneer in AI control, dives into the urgent need to manage misaligned AIs. He explains innovative techniques to detect and neutralize harmful behaviors, emphasizing the critical importance of proactive monitoring. The conversation also touches on the tension between corporate ambition and AI safety, exploring whether alignment strategies can truly keep us safe. Shlegeris advocates for small, focused teams to drive change from within the industry.

Apr 1, 2025 • 1h 43min
Off the Clock #8: Leaving Las London with Matt Reardon
In a farewell chat, Matt Reardon, soon to lead the programs team at the Institute for Law and AI, shares his journey from London to the U.S. and Korea. He reflects on valuable lessons learned at 80k and the bittersweet nature of change. The conversation then shifts to navigating the complexities of AI governance, discussing Section 230 and ongoing safety concerns. They also explore the social dynamics of workplace interactions through playful anecdotes from a retreat board game, blending humor with insights on authenticity and career transitions.

Mar 25, 2025 • 34min
Highlights: #213 – Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared
Will MacAskill, a philosopher and AI safety researcher at the Forethought Centre, discusses the staggering potential of AI to compress a century's worth of change into a mere decade. He emphasizes the urgent need to prepare for rapid societal shifts and explores what a positive future with AGI might look like. MacAskill raises crucial concerns about the risks of societal lock-in and public distrust in utopian visions. He also delves into the ethical dilemmas surrounding AGI development and its profound impacts on governance and social values.

Mar 12, 2025 • 29min
Highlights: #212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway
Allan Dafoe, Director of Frontier Safety and Governance at Google DeepMind, dives into the unstoppable nature of technology and its historical patterns. He explains how societies embracing new capabilities outpace those that resist. The discussion highlights the balance of offense and defense in technology, the complexities of AI cooperation, and the potential risks of backdoor technology in AI models. Dafoe also reflects on how human agency shapes tech development, emphasizing the need for ethical decision-making in AI's future.

Jan 13, 2025 • 1h 24min
Off the Clock #7: Getting on the Crazy Train with Chi Nguyen
Watch this episode on YouTube! https://youtu.be/IRRwHCK279E

Matt, Bella, and Huon sit down with Chi Nguyen to discuss cooperating with aliens, elections of future past, and Bad Billionaires pt. 2.

Check out:

- Matt’s summer appearance on the BBC on funding for the arts
- Chi’s ECL Explainer (get in touch to support!)

Jan 6, 2025 • 1h 1min
Highlights: #211 – Sam Bowman on why housing still isn’t fixed and what would actually work
Sam Bowman, an economist and editor of Works in Progress, dives into the housing crisis in developed countries, emphasizing the powerful grip of NIMBYism. He presents innovative solutions like street votes to empower local residents and discusses property tax distribution's significant role. Sam argues that overcoming NIMBY opposition requires rethinking incentives and highlights the surprising local support for nuclear power. He also touches on the intersection of technology in public health, especially concerning obesity and food choices.