
Future of Life Institute Podcast
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change.
The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Latest episodes

Apr 11, 2025 • 1h 36min
How Will We Cooperate with AIs? (with Allison Duettmann)
Allison Duettmann, CEO of the Foresight Institute, focuses on decentralized AI and international governance. She discusses the balance between centralized and decentralized AI, exploring how it could shape our future interactions with technology. The conversation delves into historical lessons relevant to AI, the complexities of space law, and whether tech is invented or discovered. Duettmann also emphasizes the importance of cooperating with AIs and of enhancing human decision-making to build a better world, particularly for the next generation.

Apr 4, 2025 • 1h 13min
Brain-Like AGI and Why It's Dangerous (with Steven Byrnes)
Steven Byrnes, an AGI safety and alignment researcher at the Astera Institute, explores the intricacies of brain-like AGI. He discusses the differences between controlled AGI and social-instinct AGI, highlighting the relevance of human brain functions to safe AI development. Byrnes emphasizes the importance of aligning AGI motivations with human values and the need for honesty in AI models. He also shares ways individuals can contribute to AGI safety and compares strategies for ensuring AGI benefits humanity.

Mar 28, 2025 • 1h 35min
How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)
Ege Erdil, a senior researcher at Epoch AI, dives deep into the fascinating realm of AI development and the new GATE model. He explores how evolution and brain efficiency shape our understanding of AGI requirements. Ege discusses the economic impacts of AI on labor markets and wages, highlighting which jobs are most vulnerable to automation. The conversation also touches on Moravec’s Paradox and the challenges of training complex AI models with long-term planning capabilities, emphasizing the uncertainty surrounding AI timelines and future advancements.

Mar 21, 2025 • 2h 23min
Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)
Nicholas Carlini, a security researcher at Google DeepMind, shares his expertise in adversarial machine learning and cybersecurity. He reveals intriguing insights about adversarial attacks on image classifiers and the complexities of defending against them. Carlini discusses the critical role of human intuition in developing defenses, the implications of open-source AI, and the evolving risks associated with model safety. He also explores how advanced techniques expose vulnerabilities in language models and the balance between transparency and security in AI.

Mar 13, 2025 • 1h 21min
Keep the Future Human (with Anthony Aguirre)
In a thought-provoking discussion, Anthony Aguirre, Executive Director of the Future of Life Institute, shares insights on the urgent need for responsible AI development. He emphasizes the rapid approach toward artificial general intelligence (AGI) and its potential to overshadow human roles. The conversation highlights the challenges of regulatory frameworks and the necessity for international cooperation to mitigate risks. Aguirre advocates for a balanced approach, exploring Tool AI instead of AGI, while stressing the significance of aligning AI with human values to ensure a beneficial future.

Mar 6, 2025 • 1h 16min
We Created AI. Why Don't We Understand It? (with Samir Varma)
On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts they might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the limitations of scientific explanations for AI, and whether AIs could develop emotions or consciousness.
You can find out more about Samir's work here: https://samirvarma.com
Timestamps:
00:00 AIs with free will?
08:00 Can we predict AI behavior?
11:38 AI psychology
16:24 Which concepts will AIs use?
20:19 Will we collaborate with AIs?
26:16 Will we trade with AIs?
31:40 Training data for robots
34:00 AI in finance
39:55 How much of trading is automated?
49:00 AI in biology and complex systems
59:31 Will our skills atrophy?
01:02:55 Levels of scientific explanation
01:06:12 AIs with emotions and consciousness?
01:12:12 Why can't we predict recessions?

Feb 27, 2025 • 1h 23min
Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish)
On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us. We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment.
You can check out that study here: https://palisaderesearch.org/blog/specification-gaming
Timestamps:
00:00 The pace of AI progress
04:15 How we might lose control
07:23 Why are AIs sometimes dumb?
12:52 Benchmarks vs real world
19:11 Loss of control scenarios
26:36 Why would AI turn against us?
30:35 AIs hacking chess
36:25 Why didn't more advanced AIs hack?
41:39 Creating honest AIs
49:44 AI attackers vs AI defenders
58:27 How good is security at AI companies?
01:03:37 A sense of urgency
01:10:11 What should we do?
01:15:54 Skepticism about AI progress

Feb 14, 2025 • 46min
Ann Pace on Using Biobanking and Genomic Sequencing to Conserve Biodiversity
Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts.
You can learn more about Ann's work here: https://www.wiseancestors.org
Timestamps:
00:00 What is Wise Ancestors?
04:27 Recovering after catastrophes
11:40 Decentralized science
18:28 Upfront benefit-sharing
26:30 Local communities
32:44 Recreating optimal environments
38:57 Cross-cultural collaboration

Jan 24, 2025 • 1h 26min
Michael Baggot on Superintelligence and Transhumanism from a Catholic Perspective
Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss meta-narratives, the value of cultural diversity in attitudes toward technology, and how Christian communities deal with advanced AI.
You can learn more about Michael's work here: https://catholic.tech/academics/faculty/michael-baggot
Timestamps:
00:00 Meta-narratives and transhumanism
15:28 Advanced AI and religious communities
27:22 Superintelligence
38:31 Countercultures and technology
52:38 Christian perspectives and tradition
01:05:20 God-like artificial intelligence
01:13:15 A positive vision for AI

Jan 9, 2025 • 1h 40min
David Dalrymple on Safeguarded, Transformative AI
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware. You can learn more about David's work at ARIA here: https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/ Timestamps: 00:00 What is Safeguarded AI? 16:28 Implementing Safeguarded AI 22:58 Can we trust Safeguarded AIs? 31:00 Formalizing more of the world 37:34 The performance cost of verified AI 47:58 Changing attitudes towards AI 52:39 Flexible Hardware-Enabled Guarantees 01:24:15 Mind uploading 01:36:14 Lessons from David's early life