

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on AI governance, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Feb 6, 2026 • 1h 47min
Can AI Do Our Alignment Homework? (with Ryan Kidd)
Ryan Kidd, co-executive director of MATS, builds AI safety talent pipelines and mentors researchers on interpretability and governance. He discusses AGI timelines and preparing for nearer-term risks. The conversation covers model deception, evaluation and monitoring, tradeoffs between safety work and capabilities, and what MATS looks for in applicants and researchers.

Jan 27, 2026 • 1h 5min
How to Rebuild the Social Contract After AGI (with Deric Cheng)
Deric Cheng, Director of Research at the Windfall Trust and lead of the AGI Social Contract consortium, explores how frontier AI could concentrate corporate power and reshape labor. He outlines resilient job types, taxation and welfare options, land and consumption taxes, and a phased policy roadmap to decouple economic security from work. The conversation surveys global coordination and practical reforms without diving into technical solutions.

Jan 20, 2026 • 1h 18min
How AI Can Help Humanity Reason Better (with Oly Sourbut)
Oly Sourbut, a researcher at the Future of Life Foundation, discusses innovative ways AI can enhance human reasoning and decision-making. He delves into community-driven fact-checking and the importance of keeping humans central in AI systems. The conversation covers tools for scenario planning and risk assessment while emphasizing the need for epistemic virtues in AI models. Oly also raises concerns about skill atrophy from over-reliance on AI and imagines a future where AI empowers more deliberate, aligned decision-making.

Jan 7, 2026 • 1h 20min
How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)
Nora Ammann, a technical specialist at the UK's ARIA focusing on AI safety, discusses strategies for mitigating AI risks. She highlights the dangers of rogue AI dominance and chaotic competition, emphasizing the need for early interventions. Nora proposes human-AI coalitions to foster cooperative development and scalable oversight, and explores how formal guarantees could strengthen AI resilience and safety. She also examines the complexities of agent collaboration and the role of AI in improving cybersecurity.

Dec 23, 2025 • 1h 19min
How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)
David Duvenaud, an associate professor at the University of Toronto, dives into the concept of gradual disempowerment in a post-AGI world. He discusses how slow institutional shifts could erode human power while appearing normal. The conversation covers cultural shifts toward AI, the risks of obsolete labor, and the erosion of property rights. Duvenaud also highlights the complexities of aligning AI with human values and the potential for misaligned governance if humans become unnecessary. The engaging, thought-provoking discussion closes on the future of human-AI relationships.

Dec 12, 2025 • 1h 29min
Why the AI Race Undermines Safety (with Steven Adler)
Steven Adler, former safety researcher at OpenAI, dives into the intricate challenges of AI governance. He sheds light on the competitive pressures that push labs to release potentially dangerous models too quickly. Exploring the mental health impacts of chatbots, Adler raises critical questions about who bears responsibility when AI harms users. He discusses the urgent need for regulation such as the EU AI Act and emphasizes the risks of deploying AIs without thorough safety evaluations, sparking a lively debate on the future of superintelligent systems.

Nov 27, 2025 • 1h 1min
Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)
Tyler Johnston, Executive Director of the Midas Project, advocates for AI transparency and accountability. He discusses applying the watchdog strategies of animal rights groups to hold AI companies accountable. The conversation covers OpenAI's attempts to silence critics through subpoenas and how public pressure can challenge powerful companies. Johnston emphasizes the necessity of transparency where technical safety solutions are lacking and the importance of independent audits for meaningful oversight. His insights illuminate the risks and responsibilities of AI development.

Nov 14, 2025 • 2h 3min
We're Not Ready for AGI (with Will MacAskill)
Will MacAskill, a senior research fellow at Forethought and author known for his work on longtermist ethics, dives into the complexities of AI governance. He discusses moral error risks and the challenges of ensuring that AI systems reflect ethical reasoning. The conversation touches on the urgent need for space governance and how AI can reinforce biases through sycophantic behavior. MacAskill also presents the concept of 'viatopia' to emphasize flexibility in future moral choices, highlighting the importance of designing AIs for better moral reflection.

Nov 7, 2025 • 1h 8min
What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)
Karl Koch, founder of the AI Whistleblower Initiative, dives into the urgent need for transparency and protections for insiders spotting AI safety risks. He discusses the current gaps in company policies and the critical role whistleblowing plays as a safety net. Koch offers practical steps for potential whistleblowers, emphasizing the importance of legal counsel and anonymity. The conversation also explores the challenges whistleblowers face, particularly as AI evolves rapidly, and how organizational culture needs to adapt to encourage openness.

Oct 24, 2025 • 1h 2min
Can Machines Be Truly Creative? (with Maya Ackerman)
Maya Ackerman, an AI researcher and co-founder of WaveAI, dives into the intersection of creativity and artificial intelligence. She discusses how creativity can be defined as novel and valuable output, highlighting evolution as a creative process. Maya explains that machine creativity differs from human creativity in speed and emotional context. The conversation touches on the role of AI in enhancing human capabilities rather than replacing them, and reframes hallucination as a vital part of imagination. It closes by exploring how AI can elevate human creativity through collaboration.


