
Future of Life Institute Podcast

Latest episodes

Apr 4, 2025 • 1h 13min

Brain-like AGI and why it's Dangerous (with Steven Byrnes)

Steven Byrnes, an AGI safety and alignment researcher at the Astera Institute, explores the intricacies of brain-like AGI. He discusses the differences between controlled AGI and social-instinct AGI, highlighting the relevance of human brain functions in safe AI development. Byrnes emphasizes the importance of aligning AGI motivations with human values, and the need for honesty in AI models. He also shares ways individuals can contribute to enhancing AGI safety and compares various strategies to ensure its benefit to humanity.
Mar 28, 2025 • 1h 35min

How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)

Ege Erdil, a senior researcher at Epoch AI, dives deep into the fascinating realm of AI development and the new GATE model. He explores how evolution and brain efficiency shape our understanding of AGI requirements. Ege discusses the economic impacts of AI on labor markets and wages, highlighting which jobs are most vulnerable to automation. The conversation also touches on Moravec’s Paradox and the challenges of training complex AI models with long-term planning capabilities, emphasizing the uncertainty surrounding AI timelines and future advancements.
Mar 21, 2025 • 2h 23min

Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)

Nicholas Carlini, a security researcher at Google DeepMind, shares his expertise in adversarial machine learning and cybersecurity. He reveals intriguing insights about adversarial attacks on image classifiers and the complexities of defending against them. Carlini discusses the critical role of human intuition in developing defenses, the implications of open-source AI, and the evolving risks associated with model safety. He also explores how advanced techniques expose vulnerabilities in language models and the balance between transparency and security in AI.
Mar 13, 2025 • 1h 21min

Keep the Future Human (with Anthony Aguirre)

In a thought-provoking discussion, Anthony Aguirre, Executive Director of the Future of Life Institute, shares insights on the urgent need for responsible AI development. He emphasizes the rapid approach toward artificial general intelligence (AGI) and its potential to overshadow human roles. The conversation highlights the challenges of regulatory frameworks and the necessity for international cooperation to mitigate risks. Aguirre advocates for a balanced approach, exploring Tool AI instead of AGI, while stressing the significance of aligning AI with human values to ensure a beneficial future.
Mar 6, 2025 • 1h 16min

We Created AI. Why Don't We Understand It? (with Samir Varma)

On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts they might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the limitations of scientific explanations for AI, and whether AIs could develop emotions or consciousness.

You can find out more about Samir's work here: https://samirvarma.com

Timestamps:
00:00 AIs with free will?
08:00 Can we predict AI behavior?
11:38 AI psychology
16:24 Which concepts will AIs use?
20:19 Will we collaborate with AIs?
26:16 Will we trade with AIs?
31:40 Training data for robots
34:00 AI in finance
39:55 How much of trading is automated?
49:00 AI in biology and complex systems
59:31 Will our skills atrophy?
01:02:55 Levels of scientific explanation
01:06:12 AIs with emotions and consciousness?
01:12:12 Why can't we predict recessions?
Feb 27, 2025 • 1h 23min

Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish)

On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us. We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment. You can check out that study here: https://palisaderesearch.org/blog/specification-gaming

Timestamps:
00:00 The pace of AI progress
04:15 How we might lose control
07:23 Why are AIs sometimes dumb?
12:52 Benchmarks vs real world
19:11 Loss of control scenarios
26:36 Why would AI turn against us?
30:35 AIs hacking chess
36:25 Why didn't more advanced AIs hack?
41:39 Creating honest AIs
49:44 AI attackers vs AI defenders
58:27 How good is security at AI companies?
01:03:37 A sense of urgency
01:10:11 What should we do?
01:15:54 Skepticism about AI progress
Feb 14, 2025 • 46min

Ann Pace on using Biobanking and Genomic Sequencing to Conserve Biodiversity

Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts.

You can learn more about Ann's work here: https://www.wiseancestors.org

Timestamps:
00:00 What is Wise Ancestors?
04:27 Recovering after catastrophes
11:40 Decentralized science
18:28 Upfront benefit-sharing
26:30 Local communities
32:44 Recreating optimal environments
38:57 Cross-cultural collaboration
Jan 24, 2025 • 1h 26min

Michael Baggot on Superintelligence and Transhumanism from a Catholic Perspective

Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss the meta-narratives surrounding transhumanism, the value of cultural diversity in attitudes toward technology, and how Christian communities engage with advanced AI.

You can learn more about Michael's work here: https://catholic.tech/academics/faculty/michael-baggot

Timestamps:
00:00 Meta-narratives and transhumanism
15:28 Advanced AI and religious communities
27:22 Superintelligence
38:31 Countercultures and technology
52:38 Christian perspectives and tradition
01:05:20 God-like artificial intelligence
01:13:15 A positive vision for AI
Jan 9, 2025 • 1h 40min

David Dalrymple on Safeguarded, Transformative AI

David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware.

You can learn more about David's work at ARIA here: https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/

Timestamps:
00:00 What is Safeguarded AI?
16:28 Implementing Safeguarded AI
22:58 Can we trust Safeguarded AIs?
31:00 Formalizing more of the world
37:34 The performance cost of verified AI
47:58 Changing attitudes towards AI
52:39 Flexible Hardware-Enabled Guarantees
01:24:15 Mind uploading
01:36:14 Lessons from David's early life
Dec 19, 2024 • 1h 9min

Nick Allardice on Using AI to Optimize Cash Transfers and Predict Disasters

Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters.

Learn more about Nick's work here: https://www.nickallardice.com

Timestamps:
00:00 What is GiveDirectly?
15:04 AI for targeting cash transfers
29:39 AI for predicting natural disasters
46:04 How scalable is GiveDirectly's AI approach?
58:10 Decentralized vs. centralized data collection
01:04:30 Dream scenario for GiveDirectly
