

80k After Hours
The 80,000 Hours team
Resources on how to do good with your career — and anything else we here at 80,000 Hours feel like releasing.
Episodes

Oct 4, 2024 • 23min
Highlights: #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science
Venki Ramakrishnan, a Nobel Prize-winning molecular biologist and author of 'Why We Die', takes a deep dive into anti-ageing science. He challenges the idea of death as an unavoidable aspect of evolution, suggesting that future research could significantly extend healthspan. The discussion touches on the potential social implications of life extension, including rising inequality. Ramakrishnan also examines controversial practices like young blood transfusions and the ethical dilemmas surrounding these radical methods.

Sep 30, 2024 • 22min
Highlights: #201 – Ken Goldberg on why your robot butler isn’t here yet
Ken Goldberg, a leading expert in robotics and AI, dives into why we still don't have our robot butlers. He explains Moravec's Paradox, revealing the surprising complexities robots face compared to humans. The conversation touches on the remarkable advancements in drone and quadruped technology, yet emphasizes the ongoing challenges in robot perception and control. Goldberg also discusses how automation could reshape the job market, particularly in sectors requiring high fault tolerance, like surgery and cooking, highlighting the enduring need for human expertise.

Sep 18, 2024 • 23min
Highlights: #200 – Ezra Karger on what superforecasters and experts think about existential risks
Ezra Karger, an expert on superforecasting and existential risks, dives into the fascinating world of predicting future threats. He discusses why accurate forecasts are crucial for understanding existential risks and highlights the stark disparity between superforecasters and experts on extinction probabilities. The conversation addresses the ongoing disagreements about AI risks and explores how differing worldviews shape those disagreements. Karger emphasizes the practical utility of expert forecasting in navigating these pressing global challenges.

Sep 12, 2024 • 15min
Highlights: #199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy
Nathan Calvin, an expert in AI policy, discusses California's SB 1047 and its potential to reshape AI regulation nationally. He emphasizes why we can't rely on AI companies for self-regulation and the need for proactive state-level policies. Calvin addresses the concerns surrounding open-source models and the implications of liability on innovation. His insights reveal the urgent necessity for a legal framework to manage AI advancements and the significant influence state laws can have on shaping national regulations.

Sep 9, 2024 • 24min
Highlights: #198 – Meghan Barrett on challenging our assumptions about insects
Meghan Barrett, an expert in insect behavior and sentience, dives into our often flawed perceptions of insects. She discusses the astonishing diversity in insect sizes, challenging the notion that they are all tiny. The conversation also touches on insect parenting and lifespan, revealing surprising complexities in their reproductive behaviors. Barrett examines the potential for insect pain perception and the evolutionary factors influencing it, ultimately advocating for a more nuanced understanding of insect welfare and consciousness.

Sep 5, 2024 • 22min
Highlights: #197 – Nick Joseph on whether Anthropic’s AI safety policy is up to the task
Nick Joseph, an expert at Anthropic, dives into the intricacies of AI safety policies. He discusses the Responsible Scaling Policy (RSP) and its pivotal role in managing AI risks. Nick expresses his enthusiasm for RSPs but shares concerns about their effectiveness when not fully embraced by teams. He debates the need for wider safety buffers and alternative safety strategies. Additionally, he encourages industry professionals to consider capabilities roles to aid in developing robust safety measures. A thought-provoking chat on securing the future of AI!

Aug 30, 2024 • 26min
Highlights: #196 – Jonathan Birch on the edge cases of sentience and why they matter
Jonathan Birch, an expert in sentience, dives into intriguing discussions on the history of neonatal surgery sans anesthesia and the misconceptions surrounding pain in newborns. He sheds light on the complex link between fetal sentience and abortion, arguing for bodily autonomy in advocacy. The conversation delves into the ethical stakes of neural organoids and the potential for AI sentience through computational emulation. Birch also champions the importance of citizen assemblies in policymaking, highlighting the value of public perspectives in scientific ethics.

Aug 19, 2024 • 18min
Highlights: #195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them
Sella Nevo, an expert on the security of frontier AI models, delves into the precarious world of AI security. He discusses the critical need to protect model weights and the risks of unauthorized access. Drawing parallels to the notorious SolarWinds hack, he highlights vulnerabilities in machine learning infrastructure. Nevo also sheds light on nation-state threats exploiting weaknesses and the dangers of side-channel attacks. Additionally, he reveals how everyday USB devices can pose significant security risks, even for seasoned users.

Aug 12, 2024 • 35min
Highlights: #194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government
Vitalik Buterin, a leading voice in blockchain and cryptocurrency, shares his insights on defensive accelerationism and the challenges of AI regulation. He emphasizes the need for democratic governance in tech and explores bio-defense lessons from the pandemic. The conversation touches on community solutions, like Community Notes, and innovative funding through quadratic voting. Buterin also reflects on a philosophy of 'half-assing' life, advocating for moderation over absolute commitment in various aspects of self-improvement.

Jul 31, 2024 • 25min
Highlights: #193 – Sihao Huang on the risk that US–China AI competition leads to war
Sihao Huang, an expert on US-China AI dynamics, explains the fast-paced advancements in Chinese AI and the implications for global safety. He discusses how China’s growing capabilities may lead to competitive tensions, with risks of human rights violations and state control. Huang emphasizes the need for dialogue between the US and China to foster collaboration on AI safety. He warns that unchecked AI development could exacerbate authoritarian regimes, urging a critical examination of how these technologies might reshape governance and democracy.