Future of Life Institute Podcast

Future of Life Institute
16 snips
Nov 14, 2025 • 2h 3min

We're Not Ready for AGI (with Will MacAskill)

Will MacAskill, a senior research fellow at Forethought and author of 'What We Owe the Future', dives into the complexities of AI governance. He discusses the risk of moral error and the challenge of ensuring that AI systems support sound ethical reasoning. The conversation touches on the urgent need for space governance and how AI can reinforce existing biases through sycophantic behavior. MacAskill also presents the concept of 'viatopia' to emphasize keeping future moral choices open, highlighting the importance of designing AIs that aid better moral reflection.
12 snips
Nov 7, 2025 • 1h 8min

What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)

Karl Koch, founder of the AI Whistleblower Initiative, dives into the urgent need for transparency and protections for insiders spotting AI safety risks. He discusses the current gaps in company policies and the critical role whistleblowing plays as a safety net. Koch offers practical steps for potential whistleblowers, emphasizing the importance of legal counsel and anonymity. The conversation also explores the challenges whistleblowers face, particularly as AI evolves rapidly, and how organizational culture needs to adapt to encourage openness.
32 snips
Oct 24, 2025 • 1h 2min

Can Machines Be Truly Creative? (with Maya Ackerman)

Maya Ackerman, an AI researcher and co-founder of WaveAI, dives into the intersection of creativity and artificial intelligence. She discusses how creativity can be defined as novel and valuable output, highlighting evolution as a creative process. Ackerman argues that machine creativity differs from human creativity in speed and emotional context. The conversation touches on the role of AI in enhancing human capabilities rather than replacing them, reframes hallucination as a vital part of imagination, and explores how AI can elevate human creativity in collaborative ways.
44 snips
Oct 14, 2025 • 47min

From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)

Parmy Olson, a technology columnist at Bloomberg and author of the award-winning book 'Supremacy', shares her insights on AI's evolution from research labs into commercial powerhouses. She discusses the impact of charismatic leaders and funding pressures on company missions, revealing how initial ideals have shifted under investor demands. Olson also addresses the human costs of rushed AI deployments, the challenges faced by safety teams, and the role of weak regulatory structures. Her skepticism about utopian AI narratives highlights the urgent need for stronger governance.
15 snips
Oct 3, 2025 • 1h 19min

Can Defense in Depth Work for AI? (with Adam Gleave)

Adam Gleave, co-founder and CEO of FAR.AI, dives deep into AI safety and alignment challenges. He introduces a three-tier framework for AI capabilities, addresses the risks of gradual disempowerment, and discusses the potential of defense-in-depth strategies. Gleave elaborates on the balance between capability and safety, offering practical steps to improve alignment and reduce deception. He also highlights FAR.AI's multifaceted approach spanning AI research, policy advocacy, and innovative hiring strategies.
67 snips
Sep 26, 2025 • 1h 7min

How We Keep Humans in Control of AI (with Beatrice Erkers)

Beatrice Erkers, who leads the Existential Hope program at the Foresight Institute, dives into the AI Pathways project. She discusses two alternatives to the rush toward AGI: Tool AI, which emphasizes human oversight, and d/acc, which focuses on decentralized, democratic development. The conversation highlights the trade-offs between safety and speed, the potential benefits of Tool AI in areas like science and governance, and how different stakeholders can help shape these safer AI futures.
34 snips
Sep 18, 2025 • 1h 40min

Why Building Superintelligence Means Human Extinction (with Nate Soares)

Nate Soares, President of the Machine Intelligence Research Institute and co-author of 'If Anyone Builds It, Everyone Dies', dives into the urgent risks posed by advanced AI systems. He explains how current AI is 'grown, not crafted,' leading to unpredictable behavior. Soares highlights the peril of intelligence threshold effects and the fact that a failed superintelligence deployment allows no second chances. He advocates for an international ban on superintelligence research to mitigate existential risks, stressing that humanity's current responses are insufficient.
21 snips
Sep 10, 2025 • 1h 10min

Breaking the Intelligence Curse (with Luke Drago)

Luke Drago, co-founder of Workshop Labs and co-author of 'The Intelligence Curse' essay series, dives deep into AI's economic implications. He discusses the potential for AI to diminish investment in human talent, exploring concepts like pyramid replacement in firms and the broader socioeconomic shifts it could trigger. Drago highlights the privacy risks associated with AI training data and the balance needed between centralized safety measures and democratization. He also emphasizes the importance of embracing career risks during this technological transition.
90 snips
Sep 1, 2025 • 1h 36min

What Markets Tell Us About AI Timelines (with Basil Halperin)

Basil Halperin, an assistant professor of economics at the University of Virginia, dives into economic indicators and their implications for AI timelines. He discusses how rising interest rates could signal market expectations of transformative AI, and the disconnect between strong AI benchmark performance and real economic impact. The conversation also touches on market efficiency, the role of financial institutions in shaping perceptions of AI, and the potential for AI-driven wealth concentration.
52 snips
Aug 22, 2025 • 1h 18min

AGI Security: How We Defend the Future (with Esben Kran)

Esben Kran, co-director of Apart Research, dives into the critical topic of AGI security, emphasizing the need for new defenses beyond traditional cybersecurity. He discusses adaptive malware he calls 'scentware' and the complexities of ensuring safe AI communications. Kran highlights the societal shifts needed for resilient security and argues for decentralized safety models across organizations. His insights on oversight without surveillance, and on the potential threats from misaligned AI, underscore the urgent need for innovative governance.
