Future of Life Institute Podcast

Future of Life Institute
May 16, 2025 • 1h 34min

Will Future AIs Be Conscious? (with Jeff Sebo)

Join philosopher Jeff Sebo from NYU as he navigates the intriguing landscape of artificial consciousness. He explores the nuances of measuring AI sentience and the ethical implications of granting rights to these systems. Sebo discusses substrate independence and the relationship between consciousness and cognitive complexity. He raises critical questions about AI companions, the moral status of machines, and how intuition contrasts with intellect in understanding consciousness. This thought-provoking conversation walks the tightrope between innovation and responsibility.
May 9, 2025 • 1h 35min

Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz)

Zvi Mowshowitz, a writer focused on AI with a background in gaming and trading, dives deep into the fascinating world of artificial intelligence. He discusses the dangers of sycophantic AIs that flatter their users, the bottlenecks limiting AI autonomy, and whether benchmarks truly measure AI success. Mowshowitz explores AI's unique features, its growing role in finance, and the implications of automating scientific research. The conversation highlights humanity's uncertain AI-led future and the need for robust safety measures as we advance.
Apr 25, 2025 • 1h 3min

Inside China's AI Strategy: Innovation, Diffusion, and US Relations (with Jeffrey Ding)

Jeffrey Ding, an expert on US-China dynamics and AI technology at George Washington University, dives into the complex world of AI innovation and diffusion. He discusses the misconceptions around an AI arms race, contrasting the distinct strategies of the U.S. and China. Jeffrey sheds light on China's views on AI safety and the challenges of disseminating AI technology. He also shares fascinating insights from translating Chinese AI writings, emphasizing how automating translation can bridge knowledge gaps in the global tech landscape.
Apr 11, 2025 • 1h 36min

How Will We Cooperate with AIs? (with Allison Duettmann)

On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children.

You can learn more about Allison's work at: https://foresight.org

Timestamps:
00:00:00 Preview
00:01:07 Centralized AI versus decentralized AI
00:13:02 Risks from decentralized AI
00:25:39 International AI governance
00:39:52 Cooperation with future AIs
00:53:51 AI for decision-making
01:05:58 Capital intensity of AI
01:09:11 Lessons from history
01:15:50 Future space law and property rights
01:27:28 Is technology invented or discovered?
01:32:34 Children in the age of AI
Apr 4, 2025 • 1h 13min

Brain-like AGI and why it's Dangerous (with Steven Byrnes)

On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies.

You can learn more about Steven's work at: https://sjbyrnes.com/agi.html

Timestamps:
00:00 Preview
00:54 Brain-like AGI safety
13:16 Controlled AGI versus social-instinct AGI
19:12 Learning from the brain
28:36 Why is brain-like AI the most likely path to AGI?
39:23 Honesty in AI models
44:02 How to help with brain-like AGI safety
53:36 AI traits with both positive and negative effects
01:02:44 Different AI safety strategies
Mar 28, 2025 • 1h 35min

How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)

On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec’s Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines.

You can learn more about Ege's work at https://epoch.ai

Timestamps:
00:00:00 Preview and introduction
00:02:59 Compute scaling and automation - GATE model
00:13:12 Evolution, brain efficiency, and AGI compute requirements
00:29:49 Broad automation vs. R&D-focused AI deployment
00:47:19 AI, wages, and labor market transitions
00:59:54 Training agentic models and long-term planning capabilities
01:06:56 Moravec’s Paradox and automation of human skills
01:13:59 Which jobs are most vulnerable to AI?
01:33:00 Timeline extremes: what could change AI forecasts?
Mar 21, 2025 • 2h 23min

Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)

In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers, and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research.

Timestamps:
00:00 Nicholas Carlini's contributions to cybersecurity
08:19 Understanding attack strategies
29:39 High-dimensional spaces and attack intuitions
51:00 Challenges in open-source model safety
01:00:11 Unlearning and fact editing in models
01:10:55 Adversarial examples and human robustness
01:37:03 Cryptography and AI robustness
01:55:51 Scaling AI security research
Mar 13, 2025 • 1h 21min

Keep the Future Human (with Anthony Aguirre)

On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human: https://keepthefuturehuman.ai

AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to human replacement. But it doesn't have to. Learn how we can keep the future human and experience the extraordinary benefits of Tool AI...

Timestamps:
00:00 What situation is humanity in?
05:00 Why AI progress is fast
09:56 Tool AI instead of AGI
15:56 The incentives of AI companies
19:13 Governments can coordinate a slowdown
25:20 The need for international coordination
31:59 Monitoring training runs
39:10 Do reasoning models undermine compute governance?
49:09 Why isn't alignment enough?
59:42 How do we decide if we want AGI?
01:02:18 Disagreement about AI
01:11:12 The early days of AI risk
Mar 6, 2025 • 1h 16min

We Created AI. Why Don't We Understand It? (with Samir Varma)

On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts AIs might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the limitations of scientific explanations for AI, and whether AIs could develop emotions or consciousness.

You can find out more about Samir's work here: https://samirvarma.com

Timestamps:
00:00 AIs with free will?
08:00 Can we predict AI behavior?
11:38 AI psychology
16:24 Which concepts will AIs use?
20:19 Will we collaborate with AIs?
26:16 Will we trade with AIs?
31:40 Training data for robots
34:00 AI in finance
39:55 How much of trading is automated?
49:00 AI in biology and complex systems
59:31 Will our skills atrophy?
01:02:55 Levels of scientific explanation
01:06:12 AIs with emotions and consciousness?
01:12:12 Why can't we predict recessions?
Feb 27, 2025 • 1h 23min

Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish)

On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us. We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment. You can check out that study here: https://palisaderesearch.org/blog/specification-gaming

Timestamps:
00:00 The pace of AI progress
04:15 How we might lose control
07:23 Why are AIs sometimes dumb?
12:52 Benchmarks vs real world
19:11 Loss of control scenarios
26:36 Why would AI turn against us?
30:35 AIs hacking chess
36:25 Why didn't more advanced AIs hack?
41:39 Creating honest AIs
49:44 AI attackers vs AI defenders
58:27 How good is security at AI companies?
01:03:37 A sense of urgency
01:10:11 What should we do?
01:15:54 Skepticism about AI progress
