
80,000 Hours Podcast

Latest episodes

82 snips
Mar 28, 2025 • 2h 36min

15 expert takes on infosec in the age of AI

"There’s almost no story of the future going well that doesn’t have a part that’s like '…and no evil person steals the AI weights and goes and does evil stuff.' So it has highlighted the importance of information security: 'You’re training a powerful AI system; you should make it hard for someone to steal' has popped out to me as a thing that just keeps coming up in these stories, keeps being present. It’s hard to tell a story where it’s not a factor. It’s easy to tell a story where it is a factor." — Holden Karnofsky

What happens when a USB cable can secretly control your system? Are we hurtling toward a security nightmare as critical infrastructure connects to the internet? Is it possible to secure AI model weights from sophisticated attackers? And could AI actually make computer security better rather than worse?

With AI security concerns becoming increasingly urgent, we bring you insights from 15 top experts across information security, AI safety, and governance, examining the challenges of protecting our most powerful AI models and digital infrastructure — including a sneak peek from a not-yet-released episode with Tom Davidson, where he explains why we should be more worried about “secret loyalties” in AI agents.
You’ll hear:
- Holden Karnofsky on why every good future relies on strong infosec, and how hard it’s been to hire security experts (from episode #158)
- Tantum Collins on why infosec might be the rare issue everyone agrees on (episode #166)
- Nick Joseph on whether AI companies can develop frontier models safely with the current state of information security (episode #197)
- Sella Nevo on why AI model weights are so valuable to steal, the weaknesses of air-gapped networks, and the risks of USBs (episode #195)
- Kevin Esvelt on what cryptographers can teach biosecurity experts (episode #164)
- Lennart Heim on Rob’s computer security nightmares (episode #155)
- Zvi Mowshowitz on the insane lack of security mindset at some AI companies (episode #184)
- Nova DasSarma on the best current defences against well-funded adversaries, politically motivated cyberattacks, and exciting progress in infosecurity (episode #132)
- Bruce Schneier on whether AI could eliminate software bugs for good, and why it’s bad to hook everything up to the internet (episode #64)
- Nita Farahany on the dystopian risks of hacked neurotech (episode #174)
- Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (episode #194)
- Nathan Labenz on how even internal teams at AI companies may not know what they’re building (episode #176)
- Allan Dafoe on backdooring your own AI to prevent theft (episode #212)
- Tom Davidson on how dangerous “secret loyalties” in AI models could be (episode to be released!)
- Carl Shulman on the challenge of trusting foreign AI models (episode #191, part 2)
- Plus lots of concrete advice on how to get into this field and find your fit

Check out the full transcript on the 80,000 Hours website.

Chapters:
- Cold open (00:00:00)
- Rob's intro (00:00:49)
- Holden Karnofsky on why infosec could be the issue on which the future of humanity pivots (00:03:21)
- Tantum Collins on why infosec is a rare AI issue that unifies everyone (00:12:39)
- Nick Joseph on whether the current state of information security makes it impossible to responsibly train AGI (00:16:23)
- Nova DasSarma on the best available defences against well-funded adversaries (00:22:10)
- Sella Nevo on why AI model weights are so valuable to steal (00:28:56)
- Kevin Esvelt on what cryptographers can teach biosecurity experts (00:32:24)
- Lennart Heim on the possibility of an autonomously replicating AI computer worm (00:34:56)
- Zvi Mowshowitz on the absurd lack of security mindset at some AI companies (00:48:22)
- Sella Nevo on the weaknesses of air-gapped networks and the risks of USB devices (00:49:54)
- Bruce Schneier on why it’s bad to hook everything up to the internet (00:55:54)
- Nita Farahany on the possibility of hacking neural implants (01:04:47)
- Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (01:10:48)
- Nova DasSarma on exciting progress in information security (01:19:28)
- Nathan Labenz on how even internal teams at AI companies may not know what they’re building (01:30:47)
- Allan Dafoe on backdooring your own AI to prevent someone else from stealing it (01:33:51)
- Tom Davidson on how dangerous “secret loyalties” in AI models could get (01:35:57)
- Carl Shulman on whether we should be worried about backdoors as governments adopt AI technology (01:52:45)
- Nova DasSarma on politically motivated cyberattacks (02:03:44)
- Bruce Schneier on the day-to-day benefits of improved security and recognising that there’s never zero risk (02:07:27)
- Holden Karnofsky on why it’s so hard to hire security people despite the massive need (02:13:59)
- Nova DasSarma on practical steps to getting into this field (02:16:37)
- Bruce Schneier on finding your personal fit in a range of security careers (02:24:42)
- Rob's outro (02:34:46)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
552 snips
Mar 11, 2025 • 3h 58min

#213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared

Will MacAskill, a philosopher and AI strategy researcher, discusses the potentially explosive advancements in AI that could compress a century's worth of change into just ten years. He elaborates on the implications of AI surpassing human capabilities, leading to rapid scientific progress and the urgency of adapting societal institutions. MacAskill also highlights the critical risks associated with AI, such as power concentration and ethical dilemmas, calling for proactive governance and alignment with human values to navigate this unprecedented transformation.
18 snips
Mar 7, 2025 • 37min

Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui)

In this engaging discussion, guest Rose Chan Loui, a nonprofit legal expert from UCLA, unpacks the legal implications of OpenAI’s shift to for-profit. She explains how a recent court order lays a potential legal minefield for the transition, particularly amidst Elon Musk's objections, and why the public has a stake in this fight: state attorneys general are poised to challenge the conversion based on the organisation's original charitable promises. Tune in to hear about the future of AI ethics and governance amid these complex legal battles.
82 snips
Feb 25, 2025 • 3h 42min

#139 Classic episode – Alan Hájek on puzzles and paradoxes in probability and expected value

Alan Hájek, a Professor of Philosophy at the Australian National University, shares his expertise on the perplexities of probability and decision-making. He dives deep into the St. Petersburg paradox, questioning the logic behind infinite expected value despite finite outcomes. The conversation also touches on philosophical methods, the significance of counterfactuals in understanding decisions, and the challenges of assigning probabilities to unprecedented events. Join this intriguing exploration of common sense versus philosophical reasoning.
Feb 19, 2025 • 2h 41min

#143 Classic episode – Jeffrey Lewis on the most common misconceptions about nuclear weapons

Jeffrey Lewis, an expert on nuclear weapons and founder of Arms Control Wonk, debunks common misconceptions about U.S. nuclear policy. He reveals that the principle of 'mutually assured destruction' was misinterpreted and critiques how military plans suggest the U.S. aims to dominate in nuclear conflicts. Lewis also discusses the complexities of decision-making in nuclear strategy and the persistent misunderstandings stemming from rigid communication within the nuclear community, emphasizing the need for international cooperation to mitigate existential threats.
239 snips
Feb 14, 2025 • 2h 44min

#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Allan Dafoe, Director of Frontier Safety and Governance at Google DeepMind, dives into the unstoppable growth of technology, illustrating how societies that adopt new capabilities often outpace those that resist. He discusses the historical context of Japan's Meiji Restoration as a case study of technological competition. Dafoe highlights the importance of steering AI development responsibly, addressing safety challenges, and fostering cooperation among AI systems. He emphasizes the balance between AI innovation and necessary governance to prevent potential risks, urging collective accountability.
27 snips
Feb 12, 2025 • 57min

Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui)

In this discussion, Rose Chan Loui, the founding executive director of UCLA Law’s Lowell Milken Center for Philanthropy and Nonprofits, dives into Elon Musk's audacious $97.4 billion bid for OpenAI. She explains the legal and ethical challenges facing OpenAI's nonprofit board amidst this pressure. The conversation highlights the complexities of balancing charitable missions with investor interests, the implications of nonprofit-to-profit transitions, and the broader societal responsibilities tied to artificial intelligence development.
86 snips
Feb 10, 2025 • 3h 12min

AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023. You can decide whether the views we (and our guests) expressed then have held up over these last two busy years.

Check out the full transcript on the 80,000 Hours website.

You’ll hear:
- Ajeya Cotra on overrated AGI worries
- Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
- Ian Morris on why the future must be radically different from the present
- Nick Joseph on whether his company's internal safety policies are enough
- Richard Ngo on what everyone gets wrong about how ML models work
- Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn’t
- Carl Shulman on why you’ll prefer robot nannies over human ones
- Zvi Mowshowitz on why he’s against working at AI companies except in some safety roles
- Hugo Mercier on why even superhuman AGI won’t be that persuasive
- Rob Long on the case for and against digital sentience
- Anil Seth on why he thinks consciousness is probably biological
- Lewis Bollard on whether AI advances will help or hurt nonhuman animals
- Rohin Shah on whether humanity’s work ends at the point it creates AGI

And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Chapters:
- Cold open (00:00:00)
- Rob's intro (00:00:58)
- Rob & Luisa: Bowerbirds compiling the AI story (00:03:28)
- Ajeya Cotra on the misalignment stories she doesn’t buy (00:09:16)
- Rob & Luisa: Agentic AI and designing machine people (00:24:06)
- Holden Karnofsky on the dangers of even aligned AI, and how we probably won’t all die from misaligned AI (00:39:20)
- Ian Morris on why we won’t end up living like The Jetsons (00:47:03)
- Rob & Luisa: It’s not hard for nonexperts to understand we’re playing with fire here (00:52:21)
- Nick Joseph on whether AI companies’ internal safety policies will be enough (00:55:43)
- Richard Ngo on the most important misconception in how ML models work (01:03:10)
- Rob & Luisa: Issues Rob is less worried about now (01:07:22)
- Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08)
- Michael Webb on why he’s sceptical about explosive economic growth (01:20:50)
- Carl Shulman on why people will prefer robot nannies over humans (01:28:25)
- Rob & Luisa: Should we expect AI-related job loss? (01:36:19)
- Zvi Mowshowitz on why he thinks it’s a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06)
- Holden Karnofsky on the power that comes from just making models bigger (01:45:21)
- Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49)
- Hugo Mercier on how AI won’t cause misinformation pandemonium (01:58:29)
- Rob & Luisa: How hard will it actually be to create intelligence? (02:09:08)
- Robert Long on whether digital sentience is possible (02:15:09)
- Anil Seth on why he believes in the biological basis of consciousness (02:27:21)
- Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52)
- Rob & Luisa: The most interesting new argument Rob’s heard this year (02:50:37)
- Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35)
- Rob's outro (03:11:02)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore
16 snips
Feb 7, 2025 • 3h 10min

#124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

Karen Levy, a seasoned expert in global health and development, discusses the pitfalls of overly fashionable concepts like 'sustainability' and 'holistic approaches' in development projects. She critiques the misguided focus on these terms, arguing that they can lead to ineffective solutions. Levy highlights the successful scaling of deworming initiatives, sharing insights on the challenges faced in implementation and funding. Through her experience with community engagement in Kenya, she emphasizes the need for realistic, evidence-based approaches to bring about meaningful change.
10 snips
Feb 4, 2025 • 1h 15min

If digital minds could suffer, how would we ever know? (Article)

The podcast dives into the intriguing debate over the moral status of AI and whether digital minds can truly experience sentience. It contrasts perspectives from experts addressing the ethical implications of creating conscious AI. The discussion raises essential questions about responsibility towards potential AI welfare and the risks of misunderstanding their capacities. The need for research into assessing AI's moral status emerges as a critical theme, highlighting both the potential risks and benefits of advancing AI technology.