Future of Life Institute Podcast

Latest episodes

Jul 25, 2024 • 30min

Mary Robinson (Former President of Ireland) on Long-View Leadership

Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org

Timestamps:
00:00 Mary's journey to presidency
05:11 Long-view leadership
06:55 Prioritizing global problems
08:38 Risks from artificial intelligence
11:55 Climate change
15:18 Barriers to global gender equality
16:28 Risk of nuclear war
20:51 Advice to future leaders
22:53 Humor in politics
24:21 Barriers to international cooperation
27:10 Institutions and technological change
Jul 11, 2024 • 1h 4min

Emilia Javorsky on How AI Concentrates Power

AI expert Emilia Javorsky discusses AI-driven power concentration and strategies for mitigating it, touching on techno-optimism, global monoculture, and imagining utopia. The conversation also explores open-source AI and the role of institutions and incentives in combating power concentration.
Jun 21, 2024 • 1h 32min

Anton Korinek on Automating Work and the Economics of an Intelligence Explosion

Anton Korinek talks about automation's impact on wages, task complexity, Moravec's paradox, career transitions, the economics of an intelligence explosion, the lump of labor fallacy, universal basic income, and market structure in the AI industry.
Jun 7, 2024 • 1h 36min

Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light

Christian Ruhl discusses US-China competition, risks of war, hotlines between countries, and catastrophic biological risks. Topics include the security dilemma, track two diplomacy, importance of hotlines, post-war risk reduction, biological vs. nuclear weapons, biosecurity landscape, germicidal UV light, and civilizations in collapse.
May 24, 2024 • 37min

Christian Nunes on Deepfakes (with Max Tegmark)

Christian Nunes discusses the impact of deepfakes on women, advocating for protecting ordinary victims and promoting deepfake legislation. Topics include current harms, bodily autonomy, and NOW's work on AI.
May 3, 2024 • 1h 45min

Dan Faggella on the Race to AGI

Dan Faggella, AI expert and entrepreneur, discusses the implications of AGI, AI power dynamics, industry implementations of AI, and what drives AI progress.
Apr 19, 2024 • 1h 27min

Liron Shapira on Superintelligence Goals

Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?
Apr 5, 2024 • 1h 26min

Annie Jacobsen on Nuclear War - A Second-by-Second Timeline

Annie Jacobsen, an expert on nuclear war, lays out a second-by-second timeline of a nuclear war scenario. Discussions include time pressure, detecting nuclear attacks, decisions under pressure, submarines, interceptor missiles, cyberattacks, and the concentration of power.
Mar 14, 2024 • 1h 8min

Katja Grace on the Largest Survey of AI Researchers

Katja Grace discusses findings from the largest survey of AI researchers: their beliefs about discontinuous progress, the impacts of AI crossing human-level intelligence, intelligence explosions, and mitigating AI risk. Topics include AI arms races, slowing down AI development, and the dynamics of intelligence and power. Grace also explores high hopes and dire concerns, AI scaling, and what AI learns from human culture.
Feb 29, 2024 • 1h 36min

Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

Discussion of pausing frontier AI, risks during a pause, hardware overhang, safety research, the social dynamics of AI risk, and the challenges of cooperation among AGI corporations. The conversation also explores impacts on China and protests against AGI companies.
