80,000 Hours Podcast

Latest episodes

23 snips
Apr 16, 2025 • 3h 23min

#215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power

Tom Davidson, a researcher at the Forethought Centre for AI Strategy in Oxford, dives into the chilling potential of AI to facilitate power grabs by small, organized groups. He discusses how AI advancements could empower military coups and autocratic rule by minimizing the need for public participation. Davidson warns of 'secret loyalties' in AI systems that might enable tyranny. The conversation highlights urgent ethical implications for democracy and underscores the necessity of transparency in developing AI technologies.
88 snips
Apr 11, 2025 • 1h 47min

Guilt, imposter syndrome & doing good: 16 past guests share their mental health journeys

This conversation features Tim LeBon, a therapist specialising in perfectionism; Hannah Ritchie, a data researcher; Christian Ruhl, a grantmaker who stutters; Will MacAskill, a moral philosopher; and Ajeya Cotra, a grantmaker focused on AI. They discuss how moral perfectionism can harm self-identity, the toll of imposter syndrome in high-stakes environments, and the necessity of self-acceptance. They share personal struggles with guilt and anxiety, and how they balance making an impact with maintaining mental well-being. Their insights offer a roadmap for navigating emotional barriers in the pursuit of doing good.
69 snips
Apr 4, 2025 • 2h 16min

#214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway

Buck Shlegeris, CEO of Redwood Research, dives into the crucial topic of AI control mechanisms to mitigate risks of misalignment. He shares insights on developing safety protocols for advanced AIs that could potentially act against human interests. Shlegeris emphasizes actionable strategies that aren't as complex as they seem. The discussion highlights the urgent need for robust safeguards in AI deployment and the ethical implications of misaligned systems. He also explores the challenges of monitoring AI, underscoring a proactive approach to ensure safety and trust.
161 snips
Mar 28, 2025 • 2h 36min

15 expert takes on infosec in the age of AI

"There’s almost no story of the future going well that doesn’t have a part that’s like '…and no evil person steals the AI weights and goes and does evil stuff.' So it has highlighted the importance of information security: 'You’re training a powerful AI system; you should make it hard for someone to steal' has popped out to me as a thing that just keeps coming up in these stories, keeps being present. It’s hard to tell a story where it’s not a factor. It’s easy to tell a story where it is a factor." — Holden KarnofskyWhat happens when a USB cable can secretly control your system? Are we hurtling toward a security nightmare as critical infrastructure connects to the internet? Is it possible to secure AI model weights from sophisticated attackers? And could AI might actually make computer security better rather than worse?With AI security concerns becoming increasingly urgent, we bring you insights from 15 top experts across information security, AI safety, and governance, examining the challenges of protecting our most powerful AI models and digital infrastructure — including a sneak peek from an episode that hasn’t yet been released with Tom Davidson, where he explains how we should be more worried about “secret loyalties” in AI agents. You’ll hear:Holden Karnofsky on why every good future relies on strong infosec, and how hard it’s been to hire security experts (from episode #158)Tantum Collins on why infosec might be the rare issue everyone agrees on (episode #166)Nick Joseph on whether AI companies can develop frontier models safely with the current state of information security (episode #197)Sella Nevo on why AI model weights are so valuable to steal, the weaknesses of air-gapped networks, and the risks of USBs (episode #195)Kevin Esvelt on what cryptographers can teach biosecurity experts (episode #164)Lennart Heim on on Rob’s computer security nightmares (episode #155)Zvi Mowshowitz on the insane lack of security mindset at some AI companies (episode #184)Nova DasSarma on the best current defences against well-funded adversaries, politically motivated cyberattacks, and exciting progress in infosecurity (episode #132)Bruce Schneier on whether AI could eliminate software bugs for good, and why it’s bad to hook everything up to the internet (episode #64)Nita Farahany on the dystopian risks of hacked neurotech (episode #174)Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (episode #194)Nathan Labenz on how even internal teams at AI companies may not know what they’re building (episode #176)Allan Dafoe on backdooring your own AI to prevent theft (episode #212)Tom Davidson on how dangerous “secret loyalties” in AI models could be (episode to be released!)Carl Shulman on the challenge of trusting foreign AI models (episode #191, part 2)Plus lots of concrete advice on how to get into this field and find your fitCheck out the full transcript on the 80,000 Hours website.Chapters:Cold open (00:00:00)Rob's intro (00:00:49)Holden Karnofsky on why infosec could be the issue on which the future of humanity pivots (00:03:21)Tantum Collins on why infosec is a rare AI issue that unifies everyone (00:12:39)Nick Joseph on whether the current state of information security makes it impossible to responsibly train AGI (00:16:23)Nova DasSarma on the best available defences against well-funded adversaries (00:22:10)Sella Nevo on why AI model weights are so valuable to steal (00:28:56)Kevin Esvelt on what cryptographers can teach biosecurity experts (00:32:24)Lennart Heim on the 
possibility of an autonomously replicating AI computer worm (00:34:56)Zvi Mowshowitz on the absurd lack of security mindset at some AI companies (00:48:22)Sella Nevo on the weaknesses of air-gapped networks and the risks of USB devices (00:49:54)Bruce Schneier on why it’s bad to hook everything up to the internet (00:55:54)Nita Farahany on the possibility of hacking neural implants (01:04:47)Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (01:10:48)Nova DasSarma on exciting progress in information security (01:19:28)Nathan Labenz on how even internal teams at AI companies may not know what they’re building (01:30:47)Allan Dafoe on backdooring your own AI to prevent someone else from stealing it (01:33:51)Tom Davidson on how dangerous “secret loyalties” in AI models could get (01:35:57)Carl Shulman on whether we should be worried about backdoors as governments adopt AI technology (01:52:45)Nova DasSarma on politically motivated cyberattacks (02:03:44)Bruce Schneier on the day-to-day benefits of improved security and recognising that there’s never zero risk (02:07:27)Holden Karnofsky on why it’s so hard to hire security people despite the massive need (02:13:59)Nova DasSarma on practical steps to getting into this field (02:16:37)Bruce Schneier on finding your personal fit in a range of security careers (02:24:42)Rob's outro (02:34:46)Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Katy Moore and Milo McGuireTranscriptions and web: Katy Moore
563 snips
Mar 11, 2025 • 3h 58min

#213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared

Will MacAskill, a philosopher and AI strategy researcher, discusses the potentially explosive advancements in AI that could compress a century's worth of change into just ten years. He elaborates on the implications of AI surpassing human capabilities, leading to rapid scientific progress and the urgency of adapting societal institutions. MacAskill also highlights the critical risks associated with AI, such as power concentration and ethical dilemmas, calling for proactive governance and alignment with human values to navigate this unprecedented transformation.
18 snips
Mar 7, 2025 • 37min

Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui)

In this engaging discussion, guest Rose Chan Loui, a nonprofit legal expert from UCLA, unpacks the legal implications of OpenAI’s shift to a for-profit structure. She highlights how a recent court order creates a potential legal minefield for the transition, particularly amid Elon Musk's lawsuit. Loui explains the public's stake in this fight, with state attorneys general poised to challenge the conversion based on the organization's original charitable promises. Tune in to hear about the future of AI ethics and governance amid these complex legal battles.
82 snips
Feb 25, 2025 • 3h 42min

#139 Classic episode – Alan Hájek on puzzles and paradoxes in probability and expected value

Alan Hájek, a Professor of Philosophy at the Australian National University, shares his expertise on the perplexities of probability and decision-making. He dives deep into the St. Petersburg paradox, questioning the logic behind infinite expected value despite finite outcomes. The conversation also touches on philosophical methods, the significance of counterfactuals in understanding decisions, and the challenges of assigning probabilities to unprecedented events. Join this intriguing exploration of common sense versus philosophical reasoning.
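
A quick note for context, sketched here rather than taken from the episode: in the St. Petersburg game, a fair coin is tossed until it first lands heads, and the payout is 2^n if that happens on toss n. Each term of the expected-value sum contributes probability (1/2)^n times payout 2^n, so:

\[
\mathbb{E}[\text{payout}] = \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty
\]

The expected value diverges even though every individual payout is finite, which is why no finite ticket price can match it, yet few people would pay more than a few dollars to play.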
Feb 19, 2025 • 2h 41min

#143 Classic episode – Jeffrey Lewis on the most common misconceptions about nuclear weapons

Jeffrey Lewis, an expert on nuclear weapons and founder of Arms Control Wonk, debunks common misconceptions about U.S. nuclear policy. He reveals that the principle of 'mutually assured destruction' was misinterpreted and critiques how military plans suggest the U.S. aims to dominate in nuclear conflicts. Lewis also discusses the complexities of decision-making in nuclear strategy and the persistent misunderstandings stemming from rigid communication within the nuclear community, emphasizing the need for international cooperation to mitigate existential threats.
247 snips
Feb 14, 2025 • 2h 44min

#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Allan Dafoe, Director of Frontier Safety and Governance at Google DeepMind, dives into the unstoppable growth of technology, illustrating how societies that adopt new capabilities often outpace those that resist. He discusses the historical context of Japan's Meiji Restoration as a case study of technological competition. Dafoe highlights the importance of steering AI development responsibly, addressing safety challenges, and fostering cooperation among AI systems. He emphasizes the balance between AI innovation and necessary governance to prevent potential risks, urging collective accountability.
27 snips
Feb 12, 2025 • 57min

Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui)

In this discussion, Rose Chan Loui, the founding executive director of UCLA Law’s Lowell Milken Center for Philanthropy and Nonprofits, dives into Elon Musk's audacious $97.4 billion bid for OpenAI. She explains the legal and ethical challenges facing OpenAI's nonprofit board amidst this pressure. The conversation highlights the complexities of balancing charitable missions with investor interests, the implications of nonprofit-to-profit transitions, and the broader societal responsibilities tied to artificial intelligence development.
