

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Oct 18, 2021 • 25min
Future of Life Institute's $25M Grants Program for Existential Risk Reduction
Future of Life Institute President Max Tegmark and our grants team, Andrea Berman and Daniel Filan, join us to announce a $25M multi-year AI Existential Safety Grants Program.
Topics discussed in this episode include:
- The reason Future of Life Institute is offering AI Existential Safety Grants
- Max speaks about how receiving a grant changed his career early on
- Daniel and Andrea provide details on the fellowships and future grant priorities
Check out our grants programs here: https://grants.futureoflife.org/
Join our AI Existential Safety Community:
https://futureoflife.org/team/ai-exis...
Have any feedback about the podcast? You can share your thoughts here:
https://www.surveymonkey.com/r/DRBFZCT
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Oct 1, 2021 • 58min
Filippa Lentzos on Global Catastrophic Biological Risks
Dr. Filippa Lentzos, Senior Lecturer in Science and International Security at King's College London, joins us to discuss the most pressing issues in biosecurity, big data in biology and the life sciences, and the governance of biological risk.
Topics discussed in this episode include:
- The most pressing issue in biosecurity
- Stories from when biosafety labs failed to contain dangerous pathogens
- The lethality of pathogens being worked on at biolaboratories
- Lessons from COVID-19
You can find the page for the podcast here:
https://futureoflife.org/2021/10/01/filippa-lentzos-on-emerging-threats-in-biosecurity/
Watch the video version of this episode here:
https://www.youtube.com/watch?v=I6M34oQ4v4w
Have any feedback about the podcast? You can share your thoughts here:
https://www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:35 What are the least understood aspects of biological risk?
8:32 Which groups are interested in biotechnologies that could be used for harm?
16:30 Why countries may pursue the development of dangerous pathogens
18:45 Dr. Lentzos' strands of research
25:41 Stories from when biosafety labs failed to contain dangerous pathogens
28:34 The most pressing issue in biosecurity
31:06 What is gain of function research? What are the risks?
34:57 Examples of gain of function research
36:14 What are the benefits of gain of function research?
37:54 The lethality of pathogens being worked on at biolaboratories
40:25 Benefits and risks of big data in biology and the life sciences
45:03 Creating a bioweather map or using big data for biodefense
48:35 Lessons from COVID-19
53:46 How does governance fit into biological risk?
55:59 Key takeaways from Dr. Lentzos
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Sep 16, 2021 • 1h 45min
Susan Solomon and Stephen Andersen on Saving the Ozone Layer
Susan Solomon, internationally recognized atmospheric chemist, and Stephen Andersen, leader of the Montreal Protocol, join us to tell the story of the ozone hole and their roles in helping to bring us back from the brink of disaster.
Topics discussed in this episode include:
-The industrial and commercial uses of chlorofluorocarbons (CFCs)
-How we discovered the atmospheric effects of CFCs
-The Montreal Protocol and its significance
-Dr. Solomon's, Dr. Farman's, and Dr. Andersen's crucial roles in helping to solve the ozone hole crisis
-Lessons we can take away for climate change and other global catastrophic risks
You can find the page for this podcast here: https://futureoflife.org/2021/09/16/susan-solomon-and-stephen-andersen-on-saving-the-ozone-layer/
Check out the video version of the episode here: https://www.youtube.com/watch?v=7hwh-uDo-6A&ab_channel=FutureofLifeInstitute
Check out the story of the ozone hole crisis here: https://undsci.berkeley.edu/article/0_0_0/ozone_depletion_01
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
3:13 What are CFCs and what was their role in society?
7:09 James Lovelock discovering an abundance of CFCs in the lower atmosphere
12:43 F. Sherwood Rowland's and Mario Molina's research on the atmospheric science of CFCs
19:52 How a single chlorine atom from a CFC molecule can destroy a large amount of ozone
23:12 Moving from models of ozone depletion to empirical evidence of the ozone depleting mechanism
24:41 Joseph Farman and discovering the ozone hole
30:36 Susan Solomon's discovery that the surfaces of high-altitude Antarctic clouds are crucial for ozone depletion
47:22 The Montreal Protocol
1:00:00 Who were the key stakeholders in the Montreal Protocol?
1:03:46 Stephen Andersen's efforts to phase out CFCs as the co-chair of the Montreal Protocol Technology and Economic Assessment Panel
1:13:28 The Montreal Protocol helping to prevent 11 billion metric tons of CO2 emissions per year
1:18:30 Susan and Stephen's key takeaways from their experience with the ozone hole crisis
1:24:24 What world did we avoid through our efforts to save the ozone layer?
1:28:37 The lessons Stephen and Susan take away from their experience working to phase out CFCs from industry
1:34:30 Is action on climate change practical?
1:40:34 Does the Paris Agreement have something like the Montreal Protocol Technology and Economic Assessment Panel?
1:43:23 Final words from Susan and Stephen
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Sep 7, 2021 • 1h 38min
James Manyika on Global Economic and Technological Trends
James Manyika, Chairman and Director of the McKinsey Global Institute, joins us to discuss the rapidly evolving landscape of the modern global economy and the role of technology in it.
Topics discussed in this episode include:
-The modern social contract
-Reskilling, wage stagnation, and inequality
-Technology-induced unemployment
-The structure of the global economy
-The geographic concentration of economic growth
You can find the page for this podcast here: https://futureoflife.org/2021/09/06/james-manyika-on-global-economic-and-technological-trends/
Check out the video version of the episode here: https://youtu.be/zLXmFiwT0-M
Check out the McKinsey Global Institute here: https://www.mckinsey.com/mgi/overview
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:14 What are the most important problems in the world today?
4:30 The issue of inequality
8:17 How the structure of the global economy is changing
10:21 How does the role of incentives fit into global issues?
13:00 How the social contract has evolved in the 21st century
18:20 A billion people lifted out of poverty
19:04 What drives economic growth?
29:28 How does AI automation affect the virtuous and vicious versions of productivity growth?
38:06 Automation and reflecting on jobs lost, jobs gained, and jobs changed
43:15 AGI and automation
48:00 How do we address the issue of technology-induced unemployment?
58:05 Developing countries and economies
1:01:29 The central forces in the global economy
1:07:36 The global economic center of gravity
1:09:42 Understanding the core impacts of AI
1:12:32 How do global catastrophic and existential risks fit into the modern global economy?
1:17:52 The economics of climate change and AI risk
1:20:50 Will we use AI technology like we've used fossil fuel technology?
1:24:34 The risks of AI contributing to inequality and bias
1:31:45 How do we integrate developing countries' voices in the development and deployment of AI systems?
1:33:42 James' core takeaway
1:37:19 Where to follow and learn more about James' work
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Jul 30, 2021 • 1h 35min
Michael Klare on the Pentagon's View of Climate Change and the Risks of State Collapse
Michael Klare, Five College Professor of Peace & World Security Studies, joins us to discuss the Pentagon's view of climate change, why it's distinctive, and how this all ultimately relates to the risks of great-power conflict and state collapse.
Topics discussed in this episode include:
-How the US military views and takes action on climate change
-Examples of existing climate related difficulties and what they tell us about the future
-Threat multiplication from climate change
-The risks of nuclear war and major conflict catalyzed by climate change
-The melting of the Arctic and the geopolitical situation that arises from it
-Messaging on climate change
You can find the page for this podcast here: https://futureoflife.org/2021/07/30/michael-klare-on-the-pentagons-view-of-climate-change-and-the-risks-of-state-collapse/
Check out the video version of the episode here: https://www.youtube.com/watch?v=bn57jxEoW24
Check out Michael's website here: http://michaelklare.com/
Apply for the Podcast Producer position here: futureoflife.org/job-postings/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:28 How does the Pentagon view climate change and why are they interested in it?
5:30 What are the Pentagon's main priorities besides climate change?
8:31 What are the objectives of career officers at the Pentagon and how do they see climate change?
10:32 The relationship between Pentagon career officers and the Trump administration on climate change
15:47 How is the Pentagon's view of climate change unique and important?
19:54 How climate change exacerbates existing difficulties and the issue of threat multiplication
24:25 How will climate change increase the tensions between the nuclear weapons states of India, Pakistan, and China?
26:32 What happened to Tacloban City and how is it relevant?
32:27 Why does the US military provide global humanitarian assistance?
34:39 How has climate change impacted the conditions in Nigeria and how does this inform the Pentagon's perspective?
39:40 What is the ladder of escalation for climate change related issues?
46:54 What is "all hell breaking loose"?
48:26 What is the geopolitical situation arising from the melting of the Arctic?
52:48 Why does the Bering Strait matter for the Arctic?
54:23 The Arctic as a main source of conflict for the great powers in the coming years
58:01 Are there ongoing proposals for resolving territorial disputes in the Arctic?
1:01:40 Nuclear weapons risk and climate change
1:03:32 How does the Pentagon intend to address climate change?
1:06:20 Hardening US military bases and going green
1:11:50 How climate change will affect critical infrastructure
1:15:47 How do lethal autonomous weapons fit into the risks of escalation in a world stressed by climate change?
1:19:42 How does this all affect existential risk?
1:24:39 Are there timelines for when climate change induced stresses will occur?
1:27:03 Does tying existential risks to national security issues benefit awareness around existential risk?
1:30:18 Does relating climate change to migration issues help with climate messaging?
1:31:08 A summary of the Pentagon's interest, view, and action on climate change
1:33:00 Final words from Michael
1:34:33 Where to find more of Michael's work
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Jul 9, 2021 • 41min
Avi Loeb on UFOs and if they're Alien in Origin
Avi Loeb, Professor of Science at Harvard University, joins us to discuss unidentified aerial phenomena and a recent US Government report assessing their existence and threat.
Topics discussed in this episode include:
-Evidence counting for the natural, human, and extraterrestrial origins of UAPs
-The culture of science and how it deals with UAP reports
-How humanity should respond if we discover UAPs are alien in origin
-A project for collecting high quality data on UAPs
You can find the page for this podcast here: https://futureoflife.org/2021/07/09/avi-loeb-on-ufos-and-if-theyre-alien-in-origin/
Apply for the Podcast Producer position here: futureoflife.org/job-postings/
Check out the video version of the episode here: https://www.youtube.com/watch?v=AyNlLaFTeFI&ab_channel=FutureofLifeInstitute
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
1:41 Why is the US Government report on UAPs significant?
7:08 Multiple different sensors detecting the same phenomena
11:50 Are UAPs a US technology?
13:20 Incentives to deploy powerful technology
15:48 What are the flight and capability characteristics of UAPs?
17:53 The similarities between 'Oumuamua and UAP reports
20:11 Are UAPs some form of spoofing technology?
22:48 What is the most convincing natural or conventional explanation of UAPs?
25:09 UAPs as potentially containing artificial intelligence
28:15 Can you assign a credence to UAPs being alien in origin?
29:32 Why aren't UAPs far more technologically advanced?
32:15 How should humanity respond if UAPs are found to be alien in origin?
35:15 A plan to get better data on UAPs
38:56 Final thoughts from Avi
39:40 Getting in contact with Avi to support his project
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Jul 9, 2021 • 2h 4min
Avi Loeb on 'Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures
Avi Loeb, Professor of Science at Harvard University, joins us to discuss a recent interstellar visitor, if we've already encountered alien technology, and whether we're ultimately alone in the cosmos.
Topics discussed in this episode include:
-Whether 'Oumuamua is alien or natural in origin
-The culture of science and how it affects fruitful inquiry
-Looking for signs of alien life throughout the solar system and beyond
-Alien artefacts and galactic treaties
-How humanity should handle a potential first contact with extraterrestrials
-The relationship between what is true and what is good
You can find the page for this podcast here: https://futureoflife.org/2021/07/09/avi-loeb-on-oumuamua-aliens-space-archeology-great-filters-and-superstructures/
Apply for the Podcast Producer position here: https://futureoflife.org/job-postings/
Check out the video version of the episode here: https://www.youtube.com/watch?v=qcxJ8QZQkwE&ab_channel=FutureofLifeInstitute
See our second interview with Avi here: https://soundcloud.com/futureoflife/avi-loeb-on-ufos-and-if-theyre-alien-in-origin
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
3:28 What is 'Oumuamua's wager?
11:29 The properties of 'Oumuamua and how they lend credence to the theory of it being artificial in origin
17:23 Theories of 'Oumuamua being natural in origin
21:42 Why was the smooth acceleration of 'Oumuamua significant?
23:35 What are comets and asteroids?
28:30 What we know about Oort clouds and how 'Oumuamua compares with what we'd expect of Oort cloud objects
33:40 Could there be exotic objects in Oort clouds that would account for 'Oumuamua?
38:08 What is your credence that 'Oumuamua is alien in origin?
44:50 Bayesian reasoning and 'Oumuamua
46:34 How do UFO reports and sightings affect your perspective of 'Oumuamua?
54:35 Might alien artefacts be more common than we expect?
58:48 The Drake equation
1:01:50 Where are the most likely great filters?
1:11:22 Difficulties in scientific culture and how they affect fruitful inquiry
1:27:03 The cosmic endowment, traveling to galactic clusters, and galactic treaties
1:31:34 Why don't we find evidence of alien superstructures?
1:36:36 Looking for the bio and techno signatures of alien life
1:40:27 Do alien civilizations converge on beneficence?
1:43:05 Is there a necessary relationship between what is true and what is good?
1:47:02 Is morality evidence-based knowledge?
1:48:18 Axiom-based knowledge and testing moral systems
1:54:08 International governance and making contact with alien life
1:55:59 The need for an elite scientific body to advise on global catastrophic and existential risk
1:59:57 What are the most fundamental questions?
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Jun 1, 2021 • 1h 8min
Nicolas Berggruen on the Dynamics of Power, Wisdom, and Ideas in the Age of AI
Nicolas Berggruen, investor and philanthropist, joins us to explore the dynamics of power, wisdom, technology and ideas in the 21st century.
Topics discussed in this episode include:
-What wisdom consists of
-The role of ideas in society and civilization
-The increasing concentration of power and wealth
-The technological displacement of human labor
-Democracy, universal basic income, and universal basic capital
-Living an examined life
You can find the page for this podcast here: https://futureoflife.org/2021/05/31/nicolas-berggruen-on-the-dynamics-of-power-wisdom-technology-and-ideas-in-the-age-of-ai/
Check out Nicolas' thoughts archive here: www.nicolasberggruen.com
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
1:45 The race between the power of our technology and the wisdom with which we manage it
5:19 What is wisdom?
8:30 The power of ideas
11:06 Humanity's investment in wisdom vs. the power of our technology
15:39 Why does our wisdom lag behind our power?
20:51 Technology evolving into an agent
24:28 How ideas play a role in the value alignment of technology
30:14 Wisdom for building beneficial AI and mitigating the race to power
34:37 Does Mark Zuckerberg have control of Facebook?
36:39 Safeguarding the human mind and maintaining control of AI
42:26 The importance of the examined life in the 21st century
45:56 An example of the examined life
48:54 Important ideas for the 21st century
52:46 The concentration of power and wealth, and a proposal for universal basic capital
1:03:07 Negative and positive futures
1:06:30 Final thoughts from Nicolas
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

May 20, 2021 • 1h 41min
Bart Selman on the Promises and Perils of Artificial Intelligence
Bart Selman, Professor of Computer Science at Cornell University, joins us to discuss a wide range of AI issues, from autonomous weapons and AI consciousness to international governance and the possibilities of superintelligence.
Topics discussed in this episode include:
-Negative and positive outcomes from AI in the short, medium, and long term
-The perils and promises of AGI and superintelligence
-AI alignment and AI existential risk
-Lethal autonomous weapons
-AI governance and racing to powerful AI systems
-AI consciousness
You can find the page for this podcast here: https://futureoflife.org/2021/05/20/bart-selman-on-the-promises-and-perils-of-artificial-intelligence/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
1:35 Futures that Bart is excited about
4:08 Positive futures in the short, medium, and long term
7:23 AGI timelines
8:11 Bart’s research on “planning” through the game of Sokoban
13:10 If we don’t go extinct, is the creation of AGI and superintelligence inevitable?
15:28 What’s exciting about futures with AGI and superintelligence?
17:10 How long does it take for superintelligence to arise after AGI?
21:08 Would a superintelligence have something intelligent to say about income inequality?
23:24 Are there true or false answers to moral questions?
25:30 Can AGI and superintelligence assist with moral and philosophical issues?
28:07 Do you think superintelligences converge on ethics?
29:32 Are you most excited about the short- or long-term benefits of AI?
34:30 Is existential risk from AI a legitimate threat?
35:22 Is the AI alignment problem legitimate?
43:29 What are futures that you fear?
46:24 Do social media algorithms represent an instance of the alignment problem?
51:46 The importance of educating the public on AI
55:00 Income inequality, cyber security, and negative futures
1:00:06 Lethal autonomous weapons
1:01:50 Negative futures in the long-term
1:03:26 How have your views of AI alignment evolved?
1:06:53 Bart’s plans and intentions for the Association for the Advancement of Artificial Intelligence
1:13:45 Policy recommendations for existing AIs and the AI ecosystem
1:15:35 Solving the parts of the AI alignment problem that won't be solved by industry incentives
1:18:17 Narratives of an international race to powerful AI systems
1:20:42 How does an international race to AI affect the chances of successful AI alignment?
1:23:20 Is AI a zero-sum game?
1:28:51 Lethal autonomous weapons governance
1:31:38 Does the governance of autonomous weapons affect outcomes from AGI?
1:33:00 AI consciousness
1:39:37 Alignment is important and the benefits of AI can be great
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Apr 21, 2021 • 1h 27min
Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century
Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century.
Topics discussed in this episode include:
-Intelligence and coordination
-Existential risk from AI, synthetic biology, and unknown unknowns
-AI adoption as a delegation process
-Jaan's investments and philanthropic efforts
-International coordination and incentive structures
-The short-term and long-term AI safety communities
You can find the page for this podcast here: https://futureoflife.org/2021/04/20/jaan-tallinn-on-avoiding-civilizational-pitfalls-and-surviving-the-21st-century/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
1:29 How can humanity improve?
3:10 The importance of intelligence and coordination
8:30 The bottlenecks of input/output bandwidth and processing speed between AIs and humans
15:20 Making the creation of AI feel dangerous and how the nuclear power industry killed itself by downplaying risks
17:15 How Jaan evaluates and thinks about existential risk
18:30 Nuclear weapons as the first existential risk we faced
20:47 The likelihood of unknown unknown existential risks
25:04 Why Jaan doesn't see nuclear war as an existential risk
27:54 Climate change
29:00 Existential risk from synthetic biology
31:29 Learning from mistakes, lacking foresight, and the importance of generational knowledge
36:23 AI adoption as a delegation process
42:52 Attractors in the design space of AI
44:24 The regulation of AI
45:31 Jaan's investments and philanthropy in AI
55:18 International coordination issues from AI adoption as a delegation process
57:29 AI today and the negative impacts of recommender algorithms
1:02:43 Collective, institutional, and interpersonal coordination
1:05:23 The benefits and risks of longevity research
1:08:29 The long-term and short-term AI safety communities and their relationship with one another
1:12:35 Jaan's current philanthropic efforts
1:16:28 Software as a philanthropic target
1:19:03 How do we move towards beneficial futures with AI?
1:22:30 An idea Jaan finds meaningful
1:23:33 Final thoughts from Jaan
1:25:27 Where to find Jaan
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.


