Future of Life Institute Podcast

Aug 22, 2024 • 2h 16min

Samuel Hammond on why AI Progress is Accelerating - and how Governments Should Respond

Samuel Hammond, a leading expert on AI implications, dives into the rapid acceleration of AI advancements. He discusses the balancing act of regulation amidst national security concerns surrounding AGI. Hammond also explores the ideological pursuit of superintelligence and compares AI's growth with historical economic transformations. He emphasizes the need for ethical frameworks in tech governance and the potential for AI to redefine human cognition and relationships. Join this enlightening conversation about the future of intelligence!
Aug 9, 2024 • 1h 3min

Anousheh Ansari on Innovation Prizes for Space, AI, Quantum Computing, and Carbon Removal

Anousheh Ansari, a pioneer in promoting innovation through competitions, discusses how innovation prizes can drive advancements in space, AI, quantum computing, and carbon removal. She explains the effectiveness of these prizes in attracting private investment for sustainable technologies and the intricacies of designing impactful competitions. Anousheh highlights the transformative potential of quantum computing in solving complex problems and shares her insights on the future of carbon removal strategies. Her passion for problem-solving shines through as she reflects on her journey from space explorer to innovation advocate.
Jul 25, 2024 • 30min

Mary Robinson (Former President of Ireland) on Long-View Leadership

Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org

Timestamps:
00:00 Mary's journey to presidency
05:11 Long-view leadership
06:55 Prioritizing global problems
08:38 Risks from artificial intelligence
11:55 Climate change
15:18 Barriers to global gender equality
16:28 Risk of nuclear war
20:51 Advice to future leaders
22:53 Humor in politics
24:21 Barriers to international cooperation
27:10 Institutions and technological change
Jul 11, 2024 • 1h 4min

Emilia Javorsky on how AI Concentrates Power

AI expert Emilia Javorsky discusses AI-driven power concentration and mitigation strategies, touching on techno-optimism, global monoculture, and imagining utopia. The conversation also explores open-source AI, institutions, and incentives in combating power concentration.
Jun 21, 2024 • 1h 32min

Anton Korinek on Automating Work and the Economics of an Intelligence Explosion

Anton Korinek discusses automation's impact on wages, task complexity, Moravec's paradox, career transitions, the economics of an intelligence explosion, the lump of labor fallacy, universal basic income, and market structure in the AI industry.
Jun 7, 2024 • 1h 36min

Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light

Christian Ruhl discusses US-China competition, risks of war, hotlines between countries, and catastrophic biological risks. Topics include the security dilemma, track two diplomacy, the importance of hotlines, post-war risk reduction, biological vs. nuclear weapons, the biosecurity landscape, germicidal UV light, and civilizational collapse.
May 24, 2024 • 37min

Christian Nunes on Deepfakes (with Max Tegmark)

Christian Nunes discusses the impact of deepfakes on women, advocating for protecting ordinary victims and promoting deepfake legislation. Topics include deepfakes and women, protecting victims, legislation, current harm, bodily autonomy, and NOW's work on AI.
May 3, 2024 • 1h 45min

Dan Faggella on the Race to AGI

Dan Faggella, AI expert and entrepreneur, discusses AGI implications, AI power dynamics, industry implementations, and what drives AI progress in a thought-provoking podcast conversation.
Apr 19, 2024 • 1h 27min

Liron Shapira on Superintelligence Goals

Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?
Apr 5, 2024 • 1h 26min

Annie Jacobsen on Nuclear War - a Second by Second Timeline

Annie Jacobsen, an expert on nuclear war, lays out a second-by-second timeline for nuclear war scenarios. Discussions include time pressure, detecting nuclear attacks, decisions under pressure, submarines, interceptor missiles, cyberattacks, and concentration of power.
