

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work consists of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Mar 28, 2025 • 1h 35min
How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)
On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec's Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines. You can learn more about Ege's work at https://epoch.ai
Timestamps:
00:00:00 – Preview and introduction
00:02:59 – Compute scaling and automation: the GATE model
00:13:12 – Evolution, brain efficiency, and AGI compute requirements
00:29:49 – Broad automation vs. R&D-focused AI deployment
00:47:19 – AI, wages, and labor market transitions
00:59:54 – Training agentic models and long-term planning capabilities
01:06:56 – Moravec's Paradox and automation of human skills
01:13:59 – Which jobs are most vulnerable to AI?
01:33:00 – Timeline extremes: what could change AI forecasts?

Mar 21, 2025 • 2h 23min
Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)
In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini is a security researcher at Google DeepMind who has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research.
Timestamps:
00:00 Nicholas Carlini's contributions to cybersecurity
08:19 Understanding attack strategies
29:39 High-dimensional spaces and attack intuitions
51:00 Challenges in open-source model safety
01:00:11 Unlearning and fact editing in models
01:10:55 Adversarial examples and human robustness
01:37:03 Cryptography and AI robustness
01:55:51 Scaling AI security research

Mar 13, 2025 • 1h 21min
Keep the Future Human (with Anthony Aguirre)
On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human (https://keepthefuturehuman.ai). AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to human replacement. But it doesn't have to. Learn how we can keep the future human and experience the extraordinary benefits of Tool AI...
Timestamps:
00:00 What situation is humanity in?
05:00 Why AI progress is fast
09:56 Tool AI instead of AGI
15:56 The incentives of AI companies
19:13 Governments can coordinate a slowdown
25:20 The need for international coordination
31:59 Monitoring training runs
39:10 Do reasoning models undermine compute governance?
49:09 Why isn't alignment enough?
59:42 How do we decide if we want AGI?
01:02:18 Disagreement about AI
01:11:12 The early days of AI risk

Mar 6, 2025 • 1h 16min
We Created AI. Why Don't We Understand It? (with Samir Varma)
On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts AIs might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the limitations of scientific explanations for AI, and whether AIs could develop emotions or consciousness. You can find out more about Samir's work here: https://samirvarma.com
Timestamps:
00:00 AIs with free will?
08:00 Can we predict AI behavior?
11:38 AI psychology
16:24 Which concepts will AIs use?
20:19 Will we collaborate with AIs?
26:16 Will we trade with AIs?
31:40 Training data for robots
34:00 AI in finance
39:55 How much of trading is automated?
49:00 AI in biology and complex systems
59:31 Will our skills atrophy?
01:02:55 Levels of scientific explanation
01:06:12 AIs with emotions and consciousness?
01:12:12 Why can't we predict recessions?

Feb 27, 2025 • 1h 23min
Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish)
On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us. We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment. You can check out that study here: https://palisaderesearch.org/blog/specification-gaming
Timestamps:
00:00 The pace of AI progress
04:15 How we might lose control
07:23 Why are AIs sometimes dumb?
12:52 Benchmarks vs real world
19:11 Loss of control scenarios
26:36 Why would AI turn against us?
30:35 AIs hacking chess
36:25 Why didn't more advanced AIs hack?
41:39 Creating honest AIs
49:44 AI attackers vs AI defenders
58:27 How good is security at AI companies?
01:03:37 A sense of urgency
01:10:11 What should we do?
01:15:54 Skepticism about AI progress

Feb 14, 2025 • 46min
Ann Pace on using Biobanking and Genomic Sequencing to Conserve Biodiversity
Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts. You can learn more about Ann's work here: https://www.wiseancestors.org
Timestamps:
00:00 What is Wise Ancestors?
04:27 Recovering after catastrophes
11:40 Decentralized science
18:28 Upfront benefit-sharing
26:30 Local communities
32:44 Recreating optimal environments
38:57 Cross-cultural collaboration

Jan 24, 2025 • 1h 26min
Michael Baggot on Superintelligence and Transhumanism from a Catholic Perspective
Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss meta-narratives, the value of cultural diversity in attitudes toward technology, and how Christian communities deal with advanced AI. You can learn more about Michael's work here: https://catholic.tech/academics/faculty/michael-baggot
Timestamps:
00:00 Meta-narratives and transhumanism
15:28 Advanced AI and religious communities
27:22 Superintelligence
38:31 Countercultures and technology
52:38 Christian perspectives and tradition
01:05:20 God-like artificial intelligence
01:13:15 A positive vision for AI

Jan 9, 2025 • 1h 40min
David Dalrymple on Safeguarded, Transformative AI
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware. You can learn more about David's work at ARIA here: https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/ Timestamps: 00:00 What is Safeguarded AI? 16:28 Implementing Safeguarded AI 22:58 Can we trust Safeguarded AIs? 31:00 Formalizing more of the world 37:34 The performance cost of verified AI 47:58 Changing attitudes towards AI 52:39 Flexible Hardware-Enabled Guarantees 01:24:15 Mind uploading 01:36:14 Lessons from David's early life

Dec 19, 2024 • 1h 9min
Nick Allardice on Using AI to Optimize Cash Transfers and Predict Disasters
Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters. Learn more about Nick's work here: https://www.nickallardice.com
Timestamps:
00:00 What is GiveDirectly?
15:04 AI for targeting cash transfers
29:39 AI for predicting natural disasters
46:04 How scalable is GiveDirectly's AI approach?
58:10 Decentralized vs. centralized data collection
01:04:30 Dream scenario for GiveDirectly

Dec 5, 2024 • 3h 20min
Nathan Labenz on the State of AI and Progress since GPT-4
Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4. You can find Nathan's podcast here: https://www.cognitiverevolution.ai
Timestamps:
00:00 AI progress since GPT-4
10:50 Multimodality
19:06 Low-cost models
27:58 Coding versus medicine/law
36:09 AI agents
45:29 How much are people using AI?
53:39 Open source
01:15:22 AI industry analysis
01:29:27 Are some AI models kept internal?
01:41:00 Money is not the limiting factor in AI
01:59:43 AI and biology
02:08:42 Robotics and self-driving
02:24:14 Inference-time compute
02:31:56 AI governance
02:36:29 Big-picture overview of AI progress and safety