
Future of Life Institute Podcast
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change.
The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Latest episodes

28 snips
Jul 17, 2025 • 1h 54min
How AI Could Help Overthrow Governments (with Tom Davidson)
Tom Davidson, a senior research fellow at Forethought, dives into the alarming prospect of AI-enabled coups. He discusses how advanced AI could empower covert actors to seize power and what capabilities these AIs would need for political maneuvers. The conversation highlights the unique risks of military automation and secret loyalties within organizations. Davidson outlines strategies to mitigate these emerging threats, stressing the need for transparency and regulatory frameworks to safeguard democracy against AI's influence.

95 snips
Jul 11, 2025 • 1h 45min
What Happens After Superintelligence? (with Anders Sandberg)
Anders Sandberg, a futurist and philosopher at Oxford's Future of Humanity Institute, dives into the complex implications of superintelligence. He discusses how this technology might reshape human psychology and governance, potentially leading to a post-scarcity society focused on happiness rather than wealth. Sandberg highlights the environmental challenges posed by AI, including energy demands and ecological impacts. He wraps up by addressing the intricacies of designing dependable AI systems amid rapid changes, emphasizing the balance between predictability and reliability.

95 snips
Jul 3, 2025 • 1h 10min
Why the AI Race Ends in Disaster (with Daniel Kokotajlo)
Daniel Kokotajlo, an AI governance expert at the AI Futures Project and lead author of the AI 2027 scenario, discusses AI's potential to drive transformation faster than the Industrial Revolution did. He highlights the risks of AI-driven automated coding and the necessity for transparency in AI development. The conversation also delves into the future of AI communication and the inherent risks of superintelligence. Additionally, Kokotajlo examines the importance of iterative forecasting in navigating the uncertainties of AI's trajectory.

55 snips
Jun 27, 2025 • 1h 4min
Preparing for an AI Economy (with Daniel Susskind)
Daniel Susskind, an economist and author, sheds light on the intersection of AI and the economy. He dives into the clash between AI researchers and economists over how to measure AI's impact and how to steer it in a positive direction. Susskind discusses the kinds of meaningful work that will remain for humans and questions the role of commercial incentives in AI development. He also emphasizes the evolving landscape of education, arguing for a curriculum that prioritizes adaptability and critical skills in the face of rapid technological change.

Jun 20, 2025 • 1h 27min
Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)
Ed Newton-Rex, a composer and AI expert formerly of Stability AI, dives into the complex world of copyright and AI. He discusses the ethical concerns surrounding AI-generated music and the industry's often dismissive attitude toward creators' rights. Ed recounts his resignation from Stability AI and emphasizes the need for transparency in AI training data. The conversation also touches on the future of creativity amid automation and the delicate balance between technological advancement and preserving artistic authenticity.

13 snips
Jun 13, 2025 • 1h 16min
AI Timelines and Human Psychology (with Sarah Hastings-Woodhouse)
Sarah Hastings-Woodhouse, a researcher focused on AI timelines and the psychology of AI, shares her insights on the unpredictable nature of AI development. She discusses what benchmarks actually measure and the limitations of current AI capabilities. The conversation delves into the concept of alignment by default and the vagueness of leading AI companies' AGI plans. Hastings-Woodhouse also explores the psychological toll of living through fast AI timelines rather than slow ones, emphasizing the need for thoughtful engagement amid rapid technological change.

82 snips
Jun 6, 2025 • 1h 1min
Could Powerful AI Break Our Fragile World? (with Michael Nielsen)
Michael Nielsen, a scientist and writer specializing in quantum computing and AI, dives into the pressing challenges posed by advanced technology. He discusses the dual-use nature of scientific discoveries and the difficulty institutions face in adapting to rapid AI advancements. Nielsen examines the signs of dangerous AI, the power latent in technology, and how governance can evolve. He also reflects on deep atheism versus optimistic cosmism, unpacking their relevance in today's AI-driven world.

12 snips
May 23, 2025 • 1h 33min
Facing Superintelligence (with Ben Goertzel)
Ben Goertzel, CEO of SingularityNET and a pioneering AI researcher since the 1980s, shares insights on what makes today's AI boom unique. He discusses the importance of revisiting overlooked AI research and debates whether the first AGI will be simple or complex. Goertzel explores the difficulty of aligning AGI with human values and the economic implications of the technology. He also identifies potential bottlenecks on the path to superintelligence and advocates for proactive measures humanity should take moving forward.

50 snips
May 16, 2025 • 1h 34min
Will Future AIs Be Conscious? (with Jeff Sebo)
Join philosopher Jeff Sebo from NYU as he navigates the intriguing landscape of artificial consciousness. He explores the nuances of measuring AI sentience and the ethical implications of granting rights to these systems. Sebo discusses substrate independence and the relationship between consciousness and cognitive complexity. He raises critical questions about AI companions, the moral status of machines, and how intuition contrasts with intellect in understanding consciousness. This thought-provoking conversation walks the tightrope between innovation and responsibility.

101 snips
May 9, 2025 • 1h 35min
Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz)
Zvi Mowshowitz, a writer focused on AI with a background in gaming and trading, dives deep into the fascinating world of artificial intelligence. He discusses the dangers of sycophantic AIs that flatter their users, the bottlenecks limiting AI autonomy, and whether benchmarks truly measure AI progress. Mowshowitz explores AI's unique features, its growing role in finance, and the implications of automating scientific research. The conversation highlights humanity's uncertain AI-led future and the need for robust safety measures as we advance.