
Future of Life Institute Podcast
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change.
The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Latest episodes

May 3, 2024 • 1h 45min
Dan Faggella on the Race to AGI
Dan Faggella, AI expert and entrepreneur, discusses the implications of AGI, power dynamics in AI, industry implementations of AI, and what drives AI progress.

Apr 19, 2024 • 1h 27min
Liron Shapira on Superintelligence Goals
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.
Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?

Apr 5, 2024 • 1h 26min
Annie Jacobsen on Nuclear War - a Second by Second Timeline
Annie Jacobsen, investigative journalist and author of Nuclear War: A Scenario, lays out a second-by-second timeline of a nuclear war scenario. Topics include time pressure, detecting nuclear attacks, decision-making under pressure, submarines, interceptor missiles, cyberattacks, and the concentration of power.

Mar 14, 2024 • 1h 8min
Katja Grace on the Largest Survey of AI Researchers
Katja Grace discusses findings from the largest-ever survey of AI researchers: their beliefs about discontinuous progress, the impacts of AI crossing human-level intelligence, intelligence explosions, and mitigating AI risk. Topics also include AI arms races, slowing down AI development, and the dynamics of intelligence and power. Grace explores high hopes and dire concerns, AI scaling, and what AI learns from human culture.

Feb 29, 2024 • 1h 36min
Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting
Holly Elmore discusses pausing frontier AI, risks during a pause, hardware overhang, safety research, the social dynamics of AI risk, and the challenges of cooperation among AGI corporations. The conversation also explores the impact of a pause on China and protests against AGI companies.

Feb 16, 2024 • 58min
Sneha Revanur on the Social Effects of AI
Sneha Revanur, founder of the youth-led AI advocacy group Encode Justice, discusses the social effects of AI, ethics versus safety, humans in the loop, AI in social media, AIs identifying as AIs, AI influence in elections, and AIs interacting with human systems.

Feb 2, 2024 • 1h 31min
Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable
Roman Yampolskiy, AI safety researcher, discusses whether AI is like a Shoggoth, scaling laws, evidence that AI may be uncontrollable, and the safety of designing human-like AI. They also explore the limitations of AI explainability, verifiability, and alignment. The conversation touches on the challenges of integrating AI into society, deleting dangerous information from neural networks, building human-like AI for robotics, and potential obstacles to implementing language models in various industries. They conclude by discussing Goodhart's Law, a positive vision for AI, and the challenges of regulating AI investment.

Jan 19, 2024 • 48min
Special: Flo Crivello on AI as a New Form of Life
Flo Crivello, founder of the AI assistant company Lindy, discusses AI as a new form of life, the risks of regulatory capture, the possibility of a GPU kill switch, and why he predicts AGI within 2-8 years. They also explore Biden's executive order on AI, whether to regulate models or applications, and collaboration between China and the US on AI. The conversation delves into the challenges of managing AI systems and the philosophical question of subjective experience in AI.

Jan 6, 2024 • 1h 39min
Carl Robichaud on Preventing Nuclear War
Carl Robichaud, an expert on nuclear weapons policy, discusses the new nuclear arms race, the role of world leaders and ideology in nuclear risk, the impact of nuclear weapons on stable peace, North Korea's nuclear weapons, public perception of nuclear risk, and how we might reach a stable, low-risk era.

Dec 14, 2023 • 1h 43min
Frank Sauer on Autonomous Weapon Systems
Frank Sauer discusses autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities for regulating such systems.