
Future of Life Institute Podcast
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change.
The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Latest episodes

Jun 6, 2025 • 1h 1min
Could Powerful AI Break Our Fragile World? (with Michael Nielsen)
Michael Nielsen, a scientist and writer specializing in quantum computing and AI, dives into the pressing challenges posed by advanced technology. He discusses the dual-use nature of scientific discoveries and the difficulty institutions face in adapting to rapid AI advancements. Nielsen examines the signs of dangerous AI, the latent power inherent in technology, and how governance can evolve. He also reflects on deep atheism versus optimistic cosmism, unpacking their relevance in today's AI-driven world.

May 23, 2025 • 1h 33min
Facing Superintelligence (with Ben Goertzel)
Ben Goertzel, CEO of SingularityNET and a pioneering AGI researcher, shares insights on the unique characteristics of today's AI boom. They discuss the importance of revisiting overlooked AI research and debate whether the first AGI will be simple or complex. Goertzel explores the feasibility of aligning AGI with human values and the economic implications of the technology. He also identifies potential bottlenecks to achieving superintelligence and advocates for proactive measures humanity should take moving forward.

May 16, 2025 • 1h 34min
Will Future AIs Be Conscious? (with Jeff Sebo)
Join philosopher Jeff Sebo from NYU as he navigates the intriguing landscape of artificial consciousness. He explores the nuances of measuring AI sentience and the ethical implications of granting rights to these systems. Sebo discusses substrate independence and the relationship between consciousness and cognitive complexity. He raises critical questions about AI companions, the moral status of machines, and how intuition contrasts with intellect in understanding consciousness. This thought-provoking conversation walks the tightrope between innovation and responsibility.

May 9, 2025 • 1h 35min
Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz)
Zvi Mowshowitz, a writer focused on AI with a background in gaming and trading, dives deep into the fascinating world of artificial intelligence. He discusses the dangers of sycophantic AIs that flatter their users, the bottlenecks limiting AI autonomy, and whether benchmarks truly measure AI success. Mowshowitz explores AI's unique features, its growing role in finance, and the implications of automating scientific research. The conversation highlights humanity's uncertain AI-led future and the need for robust safety measures as we advance.

Apr 25, 2025 • 1h 3min
Inside China's AI Strategy: Innovation, Diffusion, and US Relations (with Jeffrey Ding)
Jeffrey Ding, an expert on US-China dynamics and AI technology at George Washington University, dives into the complex world of AI innovation and diffusion. He discusses the misconceptions around an AI arms race, contrasting the distinct strategies of the U.S. and China. Jeffrey sheds light on China's views on AI safety and the challenges of disseminating AI technology. He also shares fascinating insights from translating Chinese AI writings, emphasizing how automating translation can bridge knowledge gaps in the global tech landscape.

Apr 11, 2025 • 1h 36min
How Will We Cooperate with AIs? (with Allison Duettmann)
Allison Duettmann, CEO of the Foresight Institute, focuses on decentralized AI and international governance. She discusses the balance between centralized and decentralized AI, exploring how it could shape our future interactions with technology. The conversation delves into historical lessons relevant to AI, the complexities of space law, and whether technology is invented or discovered. Additionally, Duettmann emphasizes the importance of cooperating with AIs and of enhancing human decision-making to build a better world, particularly for the next generation.

Apr 4, 2025 • 1h 13min
Brain-Like AGI and Why It's Dangerous (with Steven Byrnes)
Steven Byrnes, an AGI safety and alignment researcher at the Astera Institute, explores the intricacies of brain-like AGI. He discusses the differences between controlled AGI and social-instinct AGI, highlighting the relevance of human brain functions to safe AI development. Byrnes emphasizes the importance of aligning AGI motivations with human values and the need for honesty in AI models. He also shares ways individuals can contribute to AGI safety and compares strategies for ensuring it benefits humanity.

Mar 28, 2025 • 1h 35min
How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)
Ege Erdil, a senior researcher at Epoch AI, dives deep into the fascinating realm of AI development and the new GATE model. He explores how evolution and brain efficiency shape our understanding of AGI requirements. Ege discusses the economic impacts of AI on labor markets and wages, highlighting which jobs are most vulnerable to automation. The conversation also touches on Moravec’s Paradox and the challenges of training complex AI models with long-term planning capabilities, emphasizing the uncertainty surrounding AI timelines and future advancements.

Mar 21, 2025 • 2h 23min
Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)
Nicholas Carlini, a security researcher at Google DeepMind, shares his expertise in adversarial machine learning and cybersecurity. He reveals intriguing insights about adversarial attacks on image classifiers and the complexities of defending against them. Carlini discusses the critical role of human intuition in developing defenses, the implications of open-source AI, and the evolving risks associated with model safety. He also explores how advanced techniques expose vulnerabilities in language models and the balance between transparency and security in AI.

Mar 13, 2025 • 1h 21min
Keep the Future Human (with Anthony Aguirre)
In a thought-provoking discussion, Anthony Aguirre, Executive Director of the Future of Life Institute, shares insights on the urgent need for responsible AI development. He emphasizes how rapidly we are approaching artificial general intelligence (AGI) and its potential to overshadow human roles. The conversation highlights the challenges of building regulatory frameworks and the necessity of international cooperation to mitigate risks. Aguirre advocates a balanced approach that pursues Tool AI rather than AGI, while stressing the importance of aligning AI with human values to ensure a beneficial future.