
Future of Life Institute Podcast
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change.
The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Latest episodes

Oct 11, 2024 • 1h 30min
Tamay Besiroglu on AI in 2030: Scaling, Automation, and AI Agents
Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute. Here's the report we discuss in the episode: https://epochai.org/blog/can-ai-scaling-continue-through-2030
Timestamps:
00:00 How important is scaling?
08:03 How capable will AIs be in 2030?
18:33 AI agents, reasoning, and planning
23:39 Automating coding and mathematics
31:26 Uncertainty about investing in AI
40:34 Gap between investment and returns
45:30 Compute, software and data
51:54 Inference-time compute
01:08:49 Returns to software R&D
01:19:22 Limits to expanding compute

Sep 27, 2024 • 2h 9min
Ryan Greenblatt on AI Control, Timelines, and Slowing Down Around Human-Level AI
Ryan Greenblatt, a researcher focused on AI control and safety, dives deep into the complexities of AI alignment. He discusses the critical challenges of ensuring that powerful AI systems align with human values, stressing the need for robust safeguards against potential misalignments. Greenblatt explores the implications of AI's rapid advancements, including the risks of deception and manipulation. He emphasizes the importance of transparency in AI development while contemplating the timeline and takeoff speeds toward achieving human-level AI.

Sep 12, 2024 • 1h 20min
Tom Barnes on How to Build a Resilient World
Tom Barnes, an expert on AI capabilities and safety, shares insights on the critical imbalance in funding between AI safety and capabilities. He discusses the importance of robust safety protocols amidst rapid advancements. Barnes also explores global coordination challenges, particularly between the US and China, in navigating AI governance. He emphasizes the value of preparedness through war gaming, highlights the psychological defenses needed against AI manipulation, and advocates for patient philanthropy to foster a resilient world against AI risks.

Aug 22, 2024 • 2h 16min
Samuel Hammond on why AI Progress is Accelerating - and how Governments Should Respond
Samuel Hammond, a leading expert on AI implications, dives into the rapid acceleration of AI advancements. He discusses the balancing act of regulation amidst national security concerns surrounding AGI. Hammond also explores the ideological pursuit of superintelligence and compares AI's growth with historical economic transformations. He emphasizes the need for ethical frameworks in tech governance and the potential for AI to redefine human cognition and relationships. Join this enlightening conversation about the future of intelligence!

Aug 9, 2024 • 1h 3min
Anousheh Ansari on Innovation Prizes for Space, AI, Quantum Computing, and Carbon Removal
Anousheh Ansari, a pioneer in promoting innovation through competitions, discusses how innovation prizes can drive advancements in space, AI, quantum computing, and carbon removal. She explains the effectiveness of these prizes in attracting private investment for sustainable technologies and the intricacies of designing impactful competitions. Anousheh highlights the transformative potential of quantum computing in solving complex problems and shares her insights on the future of carbon removal strategies. Her passion for problem-solving shines through as she reflects on her journey from space explorer to innovation advocate.

Jul 25, 2024 • 30min
Mary Robinson (Former President of Ireland) on Long-View Leadership
Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org
Timestamps:
00:00 Mary's journey to presidency
05:11 Long-view leadership
06:55 Prioritizing global problems
08:38 Risks from artificial intelligence
11:55 Climate change
15:18 Barriers to global gender equality
16:28 Risk of nuclear war
20:51 Advice to future leaders
22:53 Humor in politics
24:21 Barriers to international cooperation
27:10 Institutions and technological change

Jul 11, 2024 • 1h 4min
Emilia Javorsky on how AI Concentrates Power
AI expert Emilia Javorsky discusses AI-driven power concentration and mitigation strategies, touching on techno-optimism, global monoculture, and imagining utopia. The conversation also explores open-source AI, institutions, and incentives in combating power concentration.

Jun 21, 2024 • 1h 32min
Anton Korinek on Automating Work and the Economics of an Intelligence Explosion
Anton Korinek talks about automation's impact on wages, tasks complexity, Moravec's paradox, career transitions, intelligence explosion economics, lump of labor fallacy, universal basic income, and market structure in AI industry.

Jun 7, 2024 • 1h 36min
Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light
Christian Ruhl discusses US-China competition, risks of war, hotlines between countries, and catastrophic biological risks. Topics include the security dilemma, track two diplomacy, importance of hotlines, post-war risk reduction, biological vs. nuclear weapons, biosecurity landscape, germicidal UV light, and civilizations in collapse.

May 24, 2024 • 37min
Christian Nunes on Deepfakes (with Max Tegmark)
Christian Nunes discusses the impact of deepfakes on women, advocating for protecting ordinary victims and promoting deepfake legislation. Topics include deepfakes and women, protecting victims, legislation, current harm, bodily autonomy, and NOW's work on AI.