

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change.
The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Oct 20, 2023 • 2h 15min
Samuel Hammond on AGI and Institutional Disruption
Samuel Hammond, an expert on AGI and institutions, discusses how AGI will transform economies, governments, and institutions. Topics include AI's impact on the economy, transaction costs, and state power, the timeline of a techno-feudalist future, and the difficulty of alignment at scale.

Oct 17, 2023 • 60min
Imagine A World: What if AI advisors helped us make better decisions?
This podcast explores a fictional world where emerging technologies shape society. Topics discussed include the arms race between advertisers and ad-filtering technologies, the addictive nature of AI-generated art, and the redistribution of wealth by corporations. The impact of technology on society, conflicts arising from AI advisors, and the portrayal of robotic assistants in fiction are also explored.

Oct 10, 2023 • 51min
Imagine A World: What if narrow AI fractured our shared reality?
Explore a future with narrow AI remaking the world, creating separate media bubbles and increasing inequality. Despite the drawbacks, AI improves medicine and therapy. The podcast discusses fictional worldbuilding, the impact of media and AI on reality, limitations of narrow AI, lack of optimism in realistic fiction, virtual celebrities and AI art, building a better future through worldbuilding, and balancing cultural perspectives.

Oct 5, 2023 • 2h 3min
Steve Omohundro on Provably Safe AGI
Steve Omohundro, co-author of "Provably Safe Systems," discusses the concept of provable safety in AI. Topics include formalizing safety, provable contracts, proof-carrying code, logical reasoning in language models, AI doing proofs for us, risks of totalitarianism, tamper-proof hardware, least-privilege guarantees, basic AI drives, AI agency and world models, self-improving AI, and the overhyping of AI.

Oct 3, 2023 • 1h 4min
Imagine A World: What if AI enabled us to communicate with animals?
This podcast explores the possibility of using AI to communicate with animals and the implications of such communication. It also delves into activism, AI's impact on jobs and nature, and the concept of universal basic income. The speakers discuss AI systems, data sovereignty, and the creation of a social network. They also explore carbon tokens and their connection to incentivizing climate-positive actions. The podcast concludes by emphasizing the power of activism, storytelling, and the importance of including diverse voices in shaping AI and technology.

Sep 26, 2023 • 59min
Imagine A World: What if some people could live forever?
In this podcast, the host interviews Mako Yass, the first place winner of the FLI Worldbuilding Contest. They discuss Mako's imaginative world 'To Light' which features life-extending pills, mind-uploading technology, and wealth distribution. They also delve into topics like AGI challenges, AI alignment, and the power of storytelling in inspiring creativity.

Sep 21, 2023 • 1h 40min
Johannes Ackva on Managing Climate Change
Johannes Ackva, a climate change expert, discusses the main drivers of climate change and our best technological and governmental options for managing it. Topics include renewable energy sources, nuclear energy, government subsidies, carbon taxation, planting trees, influencing government policy, different climate scenarios, and the decoupling of economic growth from emissions.

Sep 19, 2023 • 56min
Imagine A World: What if we had digital nations untethered to geography?
The podcast explores the concept of digital nations that provide representation and belonging for low-income countries affected by climate change. The speakers discuss the potential number of digital nations in the future and highlight Tuvalu's aim to become the world's first digital nation. They also explore the importance of equitable wealth distribution and borderless trade in Africa, and discuss the concept of digital persons and their potential to enhance moral knowledge. Additionally, they delve into the use of AI tools in education and encourage engagement with these ideas.

Sep 12, 2023 • 1h 0min
Imagine A World: What if global challenges led to more centralization?
The podcast explores the concept of a unified world governed by an advanced AI system. The team behind 'Core Central' discusses the challenges of creating a cohesive imagined world and the potential of moving beyond nation states. They touch on topics such as lethal autonomous weapons, anti-aging drugs, and the increasing centralization of political unions. They also delve into the mysteries of copyright AI and the development of AI systems based on the human mind. The episode concludes with insights on the future of centralization, conflicts in beliefs, and collaboration in worldbuilding contests.

Sep 8, 2023 • 1h 56min
Tom Davidson on How Quickly AI Could Automate the Economy
AI researcher Tom Davidson discusses the risks of AI automation, including the potential automation of AI research. Topics include the pace of AI progress, historical analogies, AI benchmarks, takeoff speed, bottlenecks, economic impacts, and the future of AI for humanity.