

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Dec 23, 2025 • 1h 19min
How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)
David Duvenaud, an associate professor at the University of Toronto, dives into the concept of gradual disempowerment in a post-AGI world. He discusses how slow institutional shifts could erode human power while appearing normal. The conversation covers cultural shifts towards AI, the risks of obsolete labor, and the erosion of property rights. Duvenaud also highlights the complexities of aligning AI with human values and the potential for misaligned governance if humans become unnecessary. In an engaging and thought-provoking discussion, he tackles the future of human-AI relationships.

Dec 12, 2025 • 1h 29min
Why the AI Race Undermines Safety (with Steven Adler)
Steven Adler, former safety researcher at OpenAI, dives into the intricate challenges of AI governance. He sheds light on the competitive pressures that push labs to release potentially dangerous models too quickly. Exploring the mental health impacts of chatbots, Adler raises critical questions about responsibility for users harmed by AI. He discusses the urgent need for international regulations like the EU AI Act and emphasizes the risks of deploying AIs without thorough safety evaluations, sparking a lively debate on the future of superintelligent systems.

Nov 27, 2025 • 1h 1min
Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)
Tyler Johnston, Executive Director of the Midas Project, advocates for AI transparency and accountability. He discusses using animal rights watchdog strategies to hold AI companies accountable. The conversation covers OpenAI's attempts to silence critics through subpoenas and how public pressure can challenge powerful entities. Johnston emphasizes the necessity of transparency where technical safety solutions are lacking and the importance of independent audits for meaningful oversight. His insights illuminate the risks and responsibilities of AI development.

Nov 14, 2025 • 2h 3min
We're Not Ready for AGI (with Will MacAskill)
Will MacAskill, a senior research fellow at Forethought and author known for his work on longtermist ethics, dives into the complexities of AI governance. He discusses moral error risks and the challenges of ensuring that AI systems reflect ethical reasoning. The conversation touches on the urgent need for space governance and how AI can reinforce biases through sycophantic behavior. MacAskill also presents the concept of 'viatopia' to emphasize flexibility in future moral choices, highlighting the importance of designing AIs for better moral reflection.

Nov 7, 2025 • 1h 8min
What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)
Karl Koch, founder of the AI Whistleblower Initiative, dives into the urgent need for transparency and protections for insiders spotting AI safety risks. He discusses the current gaps in company policies and the critical role whistleblowing plays as a safety net. Koch offers practical steps for potential whistleblowers, emphasizing the importance of legal counsel and anonymity. The conversation also explores the challenges whistleblowers face, particularly as AI evolves rapidly, and how organizational culture needs to adapt to encourage openness.

Oct 24, 2025 • 1h 2min
Can Machines Be Truly Creative? (with Maya Ackerman)
Maya Ackerman, an AI researcher and co-founder of WaveAI, dives into the fascinating intersection of creativity and artificial intelligence. She discusses how creativity can be defined as novel and valuable output, highlighting evolution as a creative process. Maya reveals that machine creativity differs from human creativity in speed and emotional context. The conversation touches on the role of AI in enhancing human capabilities rather than replacing them, and reframes hallucination as a vital part of imagination. It also explores how AI can elevate human creativity in collaborative ways.

Oct 14, 2025 • 47min
From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)
Parmy Olson, a technology columnist at Bloomberg and author of the award-winning book 'Supremacy', shares her insights on AI's evolution from research labs to commercial powerhouses. She discusses the impact of charismatic leaders and funding pressures on company missions, revealing how initial ideals have shifted due to investor demands. Parmy also addresses the human costs of rushed AI deployments, the challenges faced by safety teams, and the role of weak regulatory structures. Her skepticism about utopian AI narratives highlights the urgent need for stronger governance.

Oct 3, 2025 • 1h 19min
Can Defense in Depth Work for AI? (with Adam Gleave)
Adam Gleave, co-founder and CEO of FAR.AI and an AI researcher, dives deep into AI safety and alignment challenges. He introduces his three-tier framework for AI capabilities, addresses the risks of gradual disempowerment, and discusses the potential of defense-in-depth strategies. Gleave elaborates on the balance between capability and safety, uncovering practical steps to improve alignment and reduce deception. He also highlights FAR.AI's multifaceted approach to AI research, policy advocacy, and innovative hiring strategies.

Sep 26, 2025 • 1h 7min
How We Keep Humans in Control of AI (with Beatrice Erkers)
Beatrice Erkers, who leads the Existential Hope program at the Foresight Institute, dives into the intriguing AI Pathways project. She discusses two alternatives to the rush toward AGI: Tool AI, which emphasizes human oversight, and D/ACC, which focuses on decentralized, democratic development. The conversation highlights the trade-offs between safety and speed, the potential benefits of tool AI in areas like science and governance, and how different stakeholders can help shape these safer AI futures.

Sep 18, 2025 • 1h 40min
Why Building Superintelligence Means Human Extinction (with Nate Soares)
Nate Soares, President of the Machine Intelligence Research Institute and co-author of "If Anyone Builds It, Everyone Dies," dives into the urgent risks posed by advanced AI systems. He explains how current AI is 'grown, not crafted,' leading to unpredictable behavior. Soares highlights the peril of intelligence threshold effects and the dangers of a failed superintelligence deployment, which lacks second chances. He advocates for an international ban on superintelligence research to mitigate existential risks, stressing that humanity's current responses are insufficient.


