
Future of Life Institute Podcast
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change.
The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.
FLI has become one of the world's leading voices on AI governance, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Latest episodes

Feb 16, 2024 • 58min
Sneha Revanur on the Social Effects of AI
Sneha Revanur, AI researcher, discusses the social effects of AI, ethics vs safety, humans in the loop, AI in social media, AIs identifying as AIs, AI influence in elections, and AIs interacting with human systems.

Feb 2, 2024 • 1h 31min
Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable
Roman Yampolskiy, AI safety researcher, discusses whether AI is like a Shoggoth, scaling laws, evidence for AI being uncontrollable, and the safety of designing human-like AI. They also explore the limitations of AI explainability, verifiability, and alignment. The conversation touches on the challenges of integrating AI into society, deleting dangerous information from neural networks, building human-like AI for robotics, and potential obstacles to implementing language models in various industries. They conclude by discussing Goodhart's law, a positive vision for AI, and the challenges of regulating AI investment.

Jan 19, 2024 • 48min
Special: Flo Crivello on AI as a New Form of Life
Flo Crivello, an expert in AI and its implications for society, discusses AI as a new form of life, regulatory capture risks, the possibility of a GPU kill switch, and predicts AGI within 2-8 years. They also explore Biden's executive order on AI, regulating models or applications, and the collaboration between China and the US on AI. The podcast delves into the challenges of managing AI systems and the philosophical question of subjective experience in AI.

Jan 6, 2024 • 1h 39min
Carl Robichaud on Preventing Nuclear War
Carl Robichaud, an expert on the nuclear arms race and nuclear risk, discusses topics such as the new nuclear arms race, the role of world leaders and ideology in nuclear risk, the impact of nuclear weapons on stable peace, North Korea's nuclear weapons, public perception of nuclear risk, and reaching a stable, low-risk era.

Dec 14, 2023 • 1h 43min
Frank Sauer on Autonomous Weapon Systems
Frank Sauer discusses autonomy in weapon systems, killer drones, low-tech defenses against drones, flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems.

Dec 1, 2023 • 1h 41min
Darren McKee on Uncontrollable Superintelligence
Darren McKee, AI control and alignment expert, discusses the difficulty of controlling AI, the development of AI goals and traits, and the challenges of AI alignment. They explore the speed of AI cognition, the reliability of current and future AI systems, and the need to plan for multiple AI scenarios. Additionally, they discuss the possibility of AIs seeking self-preservation and whether there is a unified solution to AI alignment.

Nov 17, 2023 • 1h 49min
Mark Brakel on the UK AI Summit and the Future of AI Policy
Mark Brakel, Director of Policy at the Future of Life Institute, talks about the AI Safety Summit in the UK, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, autonomy in weapon systems, and the importance of companies conducting risk assessments and being held legally liable for their actions.

Nov 3, 2023 • 2h 7min
Dan Hendrycks on Catastrophic AI Risks
Dan Hendrycks, AI risk expert, discusses X.ai, evolving AI risk thinking, malicious use of AI, AI race dynamics, making AI organizations safer, and representation engineering for understanding AI traits like deception.

Oct 20, 2023 • 2h 15min
Samuel Hammond on AGI and Institutional Disruption
Samuel Hammond, an expert on AGI, discusses how it will transform economies, governments, and institutions. Topics include AI's impact on the economy, transaction costs, and state power. They explore the timeline of a techno-feudalist future and how alignment difficulty changes as AI scales.

Oct 17, 2023 • 60min
Imagine A World: What if AI advisors helped us make better decisions?
This podcast explores a fictional world where emerging technologies shape society. Topics discussed include the arms race between advertisers and ad-filtering technologies, the addictive nature of AI-generated art, and the redistribution of wealth by corporations. The impact of technology on society, conflicts arising from AI advisors, and the portrayal of robotic assistants in fiction are also explored.