

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Sep 26, 2025 • 1h 7min
How We Keep Humans in Control of AI (with Beatrice Erkers)
Beatrice Erkers, who leads the Existential Hope program at the Foresight Institute, dives into the intriguing AI Pathways project. She discusses two alternatives to the rush toward AGI: Tool AI, which emphasizes human oversight, and d/acc, which focuses on decentralized, democratic development. The conversation highlights the trade-offs between safety and speed, the potential benefits of Tool AI in areas like science and governance, and how different stakeholders can help shape these safer AI futures.

Sep 18, 2025 • 1h 40min
Why Building Superintelligence Means Human Extinction (with Nate Soares)
Nate Soares, President of the Machine Intelligence Research Institute and co-author of "If Anyone Builds It, Everyone Dies," dives into the urgent risks posed by advanced AI systems. He explains how current AI is 'grown, not crafted,' leading to unpredictable behavior. Soares highlights the peril of intelligence threshold effects and warns that a failed superintelligence deployment offers no second chances. He advocates for an international ban on superintelligence research to mitigate existential risks, stressing that humanity's current responses are insufficient.

Sep 10, 2025 • 1h 10min
Breaking the Intelligence Curse (with Luke Drago)
Luke Drago, co-founder of Workshop Labs and co-author of 'The Intelligence Curse' essay series, dives deep into AI's economic implications. He discusses the potential for AI to diminish investment in human talent, exploring concepts like pyramid replacement within firms and the socioeconomic changes that could follow. Drago highlights privacy risks associated with AI training data and the balance needed between centralized safety measures and democratization. He also emphasizes the importance of embracing career risk during this technological transition.

Sep 1, 2025 • 1h 36min
What Markets Tell Us About AI Timelines (with Basil Halperin)
Basil Halperin, an assistant professor of economics at the University of Virginia, dives into the intriguing world of economic indicators and their implications for AI timelines. He discusses how rising interest rates may signal market expectations of transformative AI, and the gap between strong AI benchmark performance and real economic impact. The conversation also touches on market efficiency, the role of financial institutions in shaping perceptions of AI, and the potential for AI-driven wealth concentration.

Aug 22, 2025 • 1h 18min
AGI Security: How We Defend the Future (with Esben Kran)
Esben Kran, Co-director of Apart Research, dives into the critical topic of AGI security, emphasizing the need for new defenses beyond traditional cybersecurity. He discusses adaptive malware called 'scentware' and the complexities of ensuring safe AI communications. Kran highlights societal shifts necessary for resilient security and argues for decentralized safety models across organizations. His insights on oversight without surveillance and the potential threats from misaligned AI reveal the urgent need for innovative governance in the age of advanced technology.

Aug 15, 2025 • 1h 27min
Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)
Benjamin Todd, a writer and founder of 80,000 Hours, shares insights on AGI and societal readiness. He discusses the transformative power of reasoning models in AI and the potential impact of feedback loops on economies. Todd explores the scalability of robotics and its challenges, including job displacement concerns. He emphasizes the importance of personal preparation, from honing valuable skills to saving strategically. The conversation highlights how society needs to adapt and engage with AI advancements responsibly.

Jul 31, 2025 • 1h 37min
From Peak Horse to Peak Human: How AI Could Replace Us (with Calum Chace)
Calum Chace, a prominent author and AI safety advocate, dives into the transformative impact of AI on our jobs and economy. He discusses the 'peak horse' analogy to highlight concerns over human obsolescence and explores the idea of universal basic income as a counterbalance to automation. Chace also reimagines education through personalized AI tutors and examines the ethical challenges of AI consciousness. His insights offer a roadmap for navigating the complex future of work and wealth distribution in an AI-driven world.

Jul 17, 2025 • 1h 54min
How AI Could Help Overthrow Governments (with Tom Davidson)
Tom Davidson, a senior research fellow at Forethought, dives into the alarming prospect of AI-enabled coups. He discusses how advanced AI could empower covert actors to seize power and what capabilities these AIs would need for political maneuvers. The conversation highlights the unique risks of military automation and secret loyalties within organizations. Davidson outlines strategies to mitigate these emerging threats, stressing the need for transparency and regulatory frameworks to safeguard democracy against AI's influence.

Jul 11, 2025 • 1h 45min
What Happens After Superintelligence? (with Anders Sandberg)
Anders Sandberg, a futurist and philosopher formerly of Oxford's Future of Humanity Institute, dives into the complex implications of superintelligence. He discusses how this technology might reshape human psychology and governance, potentially leading to a post-scarcity society focused on happiness rather than wealth. Sandberg highlights the environmental challenges posed by AI, including energy demands and ecological impacts. He wraps up by addressing the intricacies of designing dependable AI systems amid rapid changes, emphasizing the balance between predictability and reliability.

Jul 3, 2025 • 1h 10min
Why the AI Race Ends in Disaster (with Daniel Kokotajlo)
Daniel Kokotajlo, an AI governance expert at the AI Futures Project and lead author of the AI 2027 scenario, discusses AI's potential to drive change even faster than the Industrial Revolution did. He highlights the risks of AI-driven automated coding and the necessity for transparency in AI development. The conversation also delves into the future of AI communication and the inherent risks of superintelligence. Additionally, Kokotajlo examines the importance of iterative forecasting in navigating the uncertainties of AI's trajectory.