Future of Life Institute Podcast

Future of Life Institute
Jun 27, 2025 • 1h 4min

Preparing for an AI Economy (with Daniel Susskind)

Daniel Susskind, an economist and author, sheds light on the intersection of AI and the economy. He dives into the clash between AI researchers and economists over measuring AI's impact and how it can be steered positively. Susskind discusses the types of meaningful work that will remain for humans and questions the role of commercial incentives in AI development. He also emphasizes the evolving landscape of education, arguing for a curriculum that prioritizes adaptability and critical skills in the face of rapid technological change.
Jun 20, 2025 • 1h 27min

Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)

Ed Newton-Rex, a composer and AI expert with a background at Stability AI, dives into the complex world of copyright and AI. He discusses the ethical concerns surrounding AI-generated music and the industry's often dismissive attitude toward creator rights. Ed shares his journey resigning from Stability AI and emphasizes the need for transparency in AI training data. The conversation also touches on the future of creativity amid automation and the delicate balance between technological advancement and preserving artistic authenticity.
Jun 13, 2025 • 1h 16min

AI Timelines and Human Psychology (with Sarah Hastings-Woodhouse)

Sarah Hastings-Woodhouse, a researcher focused on AI timelines and the psychology of AI, shares her insights on the unpredictable nature of AI development. She discusses what benchmarks actually measure and the limitations of AI capabilities. The conversation delves into the concept of alignment by default and the vagueness of leading AI companies' AGI plans. Hastings-Woodhouse also explores the psychological fallout of navigating life in a fast-paced world versus a slower one, emphasizing the need for thoughtful engagement amidst rapid technological change.
Jun 6, 2025 • 1h 1min

Could Powerful AI Break Our Fragile World? (with Michael Nielsen)

Michael Nielsen, a scientist and writer specializing in quantum computing and AI, dives into the pressing challenges posed by advanced technology. He discusses the dual-use nature of scientific discoveries and the difficulty institutions face in adapting to rapid AI advancements. Nielsen examines the signs of dangerous AI, the latent power inherent in technology, and how governance can evolve. He also reflects on deep atheism versus optimistic cosmism, unpacking their relevance in today's AI-driven world.
May 23, 2025 • 1h 33min

Facing Superintelligence (with Ben Goertzel)

Ben Goertzel, CEO of SingularityNET and a pioneering AGI researcher, shares insights on the unique characteristics of today's AI boom. He discusses the importance of revisiting overlooked AI research and debates whether the first AGI will be simple or complex. Goertzel explores the feasibility of aligning AGI with human values and the economic implications of this technology. He also identifies potential bottlenecks to achieving superintelligence and advocates for proactive measures humanity should take moving forward.
May 16, 2025 • 1h 34min

Will Future AIs Be Conscious? (with Jeff Sebo)

Join philosopher Jeff Sebo from NYU as he navigates the intriguing landscape of artificial consciousness. He explores the nuances of measuring AI sentience and the ethical implications of granting rights to these systems. Sebo discusses substrate independence and the relationship between consciousness and cognitive complexity. He raises critical questions about AI companions, the moral status of machines, and how intuition contrasts with intellect in understanding consciousness. This thought-provoking conversation reveals the tightrope between innovation and responsibility.
May 9, 2025 • 1h 35min

Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz)

Zvi Mowshowitz, a writer focused on AI with a background in gaming and trading, dives deep into the fascinating world of artificial intelligence. He discusses the dangers of sycophantic AIs that flatter their users, the bottlenecks limiting AI autonomy, and whether benchmarks truly measure AI success. Mowshowitz explores AI's unique features, its growing role in finance, and the implications of automating scientific research. The conversation highlights humanity's uncertain AI-led future and the need for robust safety measures as we advance.
Apr 25, 2025 • 1h 3min

Inside China's AI Strategy: Innovation, Diffusion, and US Relations (with Jeffrey Ding)

Jeffrey Ding, an expert on US-China dynamics and AI technology at George Washington University, dives into the complex world of AI innovation and diffusion. He discusses the misconceptions around an AI arms race, contrasting the distinct strategies of the U.S. and China. Jeffrey sheds light on China's views on AI safety and the challenges of disseminating AI technology. He also shares fascinating insights from translating Chinese AI writings, emphasizing how automating translation can bridge knowledge gaps in the global tech landscape.
Apr 11, 2025 • 1h 36min

How Will We Cooperate with AIs? (with Allison Duettmann)

On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children. You can learn more about Allison's work at: https://foresight.org

Timestamps:
00:00:00 Preview
00:01:07 Centralized AI versus decentralized AI
00:13:02 Risks from decentralized AI
00:25:39 International AI governance
00:39:52 Cooperation with future AIs
00:53:51 AI for decision-making
01:05:58 Capital intensity of AI
01:09:11 Lessons from history
01:15:50 Future space law and property rights
01:27:28 Is technology invented or discovered?
01:32:34 Children in the age of AI
Apr 4, 2025 • 1h 13min

Brain-like AGI and Why It's Dangerous (with Steven Byrnes)

On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies. You can learn more about Steven's work at: https://sjbyrnes.com/agi.html

Timestamps:
00:00 Preview
00:54 Brain-like AGI Safety
13:16 Controlled AGI versus Social-instinct AGI
19:12 Learning from the brain
28:36 Why is brain-like AI the most likely path to AGI?
39:23 Honesty in AI models
44:02 How to help with brain-like AGI safety
53:36 AI traits with both positive and negative effects
01:02:44 Different AI safety strategies
