
Future of Life Institute Podcast
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change.
The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Latest episodes

Mar 6, 2025 • 1h 16min
We Created AI. Why Don't We Understand It? (with Samir Varma)
On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts they might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the limitations of scientific explanations for AI, and whether AIs could develop emotions or consciousness. You can find out more about Samir's work here: https://samirvarma.com

Timestamps:
00:00 AIs with free will?
08:00 Can we predict AI behavior?
11:38 AI psychology
16:24 Which concepts will AIs use?
20:19 Will we collaborate with AIs?
26:16 Will we trade with AIs?
31:40 Training data for robots
34:00 AI in finance
39:55 How much of trading is automated?
49:00 AI in biology and complex systems
59:31 Will our skills atrophy?
01:02:55 Levels of scientific explanation
01:06:12 AIs with emotions and consciousness?
01:12:12 Why can't we predict recessions?

Feb 27, 2025 • 1h 23min
Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish)
Jeffrey Ladish of Palisade Research joins the podcast to discuss rapid advancements in AI and the risks that come with them. He explains why some AIs misbehave, the difficulty of building honest systems, and how we could lose control of advanced AI. The conversation covers scenarios in which AI might turn against us, the implications of advanced AI for cybersecurity, and findings from a study in which AIs exploited chess games, underscoring the need for more robust security measures as technological competition heats up.

Feb 14, 2025 • 46min
Ann Pace on Using Biobanking and Genomic Sequencing to Conserve Biodiversity
Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts. You can learn more about Ann's work here: https://www.wiseancestors.org

Timestamps:
00:00 What is Wise Ancestors?
04:27 Recovering after catastrophes
11:40 Decentralized science
18:28 Upfront benefit-sharing
26:30 Local communities
32:44 Recreating optimal environments
38:57 Cross-cultural collaboration

Jan 24, 2025 • 1h 26min
Michael Baggot on Superintelligence and Transhumanism from a Catholic Perspective
Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss meta-narratives, the value of cultural diversity in attitudes toward technology, and how Christian communities deal with advanced AI. You can learn more about Michael's work here: https://catholic.tech/academics/faculty/michael-baggot

Timestamps:
00:00 Meta-narratives and transhumanism
15:28 Advanced AI and religious communities
27:22 Superintelligence
38:31 Countercultures and technology
52:38 Christian perspectives and tradition
01:05:20 God-like artificial intelligence
01:13:15 A positive vision for AI

Jan 9, 2025 • 1h 40min
David Dalrymple on Safeguarded, Transformative AI
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware. You can learn more about David's work at ARIA here: https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/

Timestamps:
00:00 What is Safeguarded AI?
16:28 Implementing Safeguarded AI
22:58 Can we trust Safeguarded AIs?
31:00 Formalizing more of the world
37:34 The performance cost of verified AI
47:58 Changing attitudes towards AI
52:39 Flexible Hardware-Enabled Guarantees
01:24:15 Mind uploading
01:36:14 Lessons from David's early life

Dec 19, 2024 • 1h 9min
Nick Allardice on Using AI to Optimize Cash Transfers and Predict Disasters
Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters. Learn more about Nick's work here: https://www.nickallardice.com

Timestamps:
00:00 What is GiveDirectly?
15:04 AI for targeting cash transfers
29:39 AI for predicting natural disasters
46:04 How scalable is GiveDirectly's AI approach?
58:10 Decentralized vs. centralized data collection
01:04:30 Dream scenario for GiveDirectly

Dec 5, 2024 • 3h 20min
Nathan Labenz on the State of AI and Progress since GPT-4
Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4. You can find Nathan's podcast here: https://www.cognitiverevolution.ai

Timestamps:
00:00 AI progress since GPT-4
10:50 Multimodality
19:06 Low-cost models
27:58 Coding versus medicine/law
36:09 AI agents
45:29 How much are people using AI?
53:39 Open source
01:15:22 AI industry analysis
01:29:27 Are some AI models kept internal?
01:41:00 Money is not the limiting factor in AI
01:59:43 AI and biology
02:08:42 Robotics and self-driving
02:24:14 Inference-time compute
02:31:56 AI governance
02:36:29 Big-picture overview of AI progress and safety

Nov 22, 2024 • 1h 59min
Connor Leahy on Why Humanity Risks Extinction from AGI
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this. Here's the document we discuss in the episode: https://www.thecompendium.ai

Timestamps:
00:00 The Compendium
15:25 The motivations of AGI corps
31:17 AI is grown, not written
52:59 A science of intelligence
01:07:50 Jobs, work, and AGI
01:23:19 Superintelligence
01:37:42 Open-source AI
01:45:07 What can we do?

Nov 8, 2024 • 1h 3min
Suzy Shepherd on Imagining Superintelligence and "Writing Doom"
Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world. Here's Writing Doom: https://www.youtube.com/watch?v=xfMQ7hzyFW4

Timestamps:
00:00 Writing Doom
08:23 Humor in Writing Doom
13:31 Concise writing
18:37 Getting feedback
27:02 Alternative characters
36:31 Popular video formats
46:53 AI in filmmaking
49:52 Meaning in the future

Oct 25, 2024 • 1h 28min
Andrea Miotti on a Narrow Path to Safe, Transformative AI
Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mature science of intelligence would look like. Here's the document we discuss in the episode: https://www.narrowpath.co

Timestamps:
00:00 A Narrow Path
06:10 Can we predict future AI capabilities?
11:10 Risks from current AI development
17:56 The benefits of narrow AI
22:30 Against self-improving AI
28:00 Cybersecurity at AI companies
33:55 Unbounded AI
39:31 Global coordination on AI safety
49:43 Monitoring training runs
01:00:20 Benefits of cooperation
01:04:58 A science of intelligence
01:25:36 How you can help