

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Sep 17, 2019 • 1h 17min
AIAP: Synthesizing a human's preferences into a utility function with Stuart Armstrong
In his Research Agenda v0.9: Synthesizing a human's preferences into a utility function, Stuart Armstrong develops an approach for generating friendly artificial intelligence. His alignment proposal can broadly be understood as a kind of inverse reinforcement learning where most of the task of inferring human preferences is left to the AI itself. It's up to us to build the correct assumptions, definitions, preference learning methodology, and synthesis process into the AI system such that it will be able to meaningfully learn human preferences and synthesize them into an adequate utility function. In order to get this all right, his agenda looks at how to understand and identify human partial preferences, how to ultimately synthesize these learned preferences into an "adequate" utility function, the practicalities of developing and estimating the human utility function, and how this agenda can assist in other methods of AI alignment.
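Armstrong's agenda is philosophical rather than computational, but the core "synthesis" move can be caricatured in a few lines of code. Below is a deliberately minimal toy sketch, not the agenda's actual formalism: "partial preferences" are modeled as weighted pairwise comparisons between outcomes, and all outcome names and weights are invented for illustration.

```python
from collections import defaultdict

# Each "partial preference": (preferred outcome, dispreferred outcome, weight),
# where the weight stands in for how strongly the preference is held.
# All names and numbers here are invented for illustration.
partial_prefs = [
    ("rest", "overwork", 2.0),
    ("honesty", "deception", 3.0),
    ("overwork", "poverty", 1.0),
]

def synthesize(prefs):
    """Aggregate weighted pairwise comparisons into per-outcome utilities."""
    scores = defaultdict(float)
    for better, worse, weight in prefs:
        scores[better] += weight
        scores[worse] -= weight
    mean = sum(scores.values()) / len(scores)  # zero-center the scores:
    return {o: s - mean for o, s in scores.items()}  # only relative utility matters

utility = synthesize(partial_prefs)
for outcome, u in sorted(utility.items(), key=lambda kv: -kv[1]):
    print(f"{outcome}: {u:+.2f}")
```

A real synthesis process would also have to handle contradictory, context-dependent, and underdefined preferences, which is exactly what the agenda grapples with in the episode.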
Topics discussed in this episode include:
-The core aspects and ideas of Stuart's research agenda
-Human values being changeable, manipulable, contradictory, and underdefined
-This research agenda in the context of the broader AI alignment landscape
-What the proposed synthesis process looks like
-How to identify human partial preferences
-Why a utility function anyway?
-Idealization and reflective equilibrium
-Open questions and potential problem areas
Here you can find the podcast page: https://futureoflife.org/2019/09/17/synthesizing-a-humans-preferences-into-a-utility-function-with-stuart-armstrong/
Important timestamps:
0:00 Introductions
3:24 A story of evolution (inspiring just-so story)
6:30 How does your “inspiring just-so story” help to inform this research agenda?
8:53 The two core parts to the research agenda
10:00 How this research agenda is contextualized in the AI alignment landscape
12:45 The fundamental ideas behind the research project
15:10 What are partial preferences?
17:50 Why reflexive self-consistency isn’t enough
20:05 How are humans contradictory and how does this affect the difficulty of the agenda?
25:30 Why human values being underdefined presents the greatest challenge
33:55 Expanding on the synthesis process
35:20 How to extract the partial preferences of the person
36:50 Why a utility function?
41:45 Are there alternative goal-ordering or action-producing methods for agents other than utility functions?
44:40 Extending and normalizing partial preferences and covering the rest of section 2
50:00 Moving into section 3, synthesizing the utility function in practice
52:00 Why this research agenda is helpful for other alignment methodologies
55:50 Limits of the agenda and other problems
58:40 Synthesizing a species-wide utility function
1:01:20 Concerns over the alignment methodology containing leaky abstractions
1:06:10 Reflective equilibrium and the agenda not being a philosophical ideal
1:08:10 Can we check the result of the synthesis process?
1:09:55 How did the Mahatma Armstrong idealization process fail?
1:14:40 Any clarifications for the AI alignment community?
You can take a short (4-minute) survey to share your feedback about the podcast here: https://www.surveymonkey.com/r/YWHDFV7

Sep 12, 2019 • 28min
Not Cool Ep 5: Ken Caldeira on energy, infrastructure, and planning for an uncertain climate future
Planning for climate change is particularly difficult because we're dealing with such big unknowns. How, exactly, will the climate change? Who will be affected, and how? What new innovations are possible, and how might they help address or exacerbate the current problem? But we at least know that in order to minimize the negative effects of climate change, we need to make major structural changes — to our energy systems, to our infrastructure, to our power structures — and we need to start now. On the fifth episode of Not Cool, Ariel is joined by Ken Caldeira, a climate scientist in the Carnegie Institution for Science's Department of Global Ecology and a professor in Stanford University's Department of Earth System Science. Ken shares his thoughts on the changes we need to be making, the obstacles standing in the way, and what it will take to overcome them.
Topics discussed include:
-Relationship between policy and science
-Climate deniers and why it isn't useful to argue with them
-Energy systems and replacing carbon
-Planning in the face of uncertainty
-Sociopolitical/psychological barriers to climate action
-Most urgently needed policies and actions
-Economic scope of climate solutions
-Infrastructure solutions and their political viability
-Importance of political/systemic change

Sep 10, 2019 • 25min
Not Cool Ep 4: Jessica Troni on helping countries adapt to climate change
The reality is, no matter what we do going forward, we’ve already changed the climate. So while it’s critical to try to minimize those changes, it’s also important that we start to prepare for them. On Episode 4 of Not Cool, Ariel explores the concept of climate adaptation — what it means, how it’s being implemented, and where there’s still work to be done. She’s joined by Jessica Troni, head of UN Environment’s Climate Change Adaptation Unit, who talks warming scenarios, adaptation strategies, implementation barriers, and more.
Topics discussed include:
Climate adaptation: ecology-based, infrastructure
Funding sources
Barriers: financial, absorptive capacity
Developed vs. developing nations: differences in adaptation approaches, needs, etc.
UN Environment
Policy solutions
Social unrest in relation to climate
Feedback loops and runaway climate change
Warming scenarios
What individuals can do

Sep 5, 2019 • 38min
Not Cool Ep 3: Tim Lenton on climate tipping points
What is a climate tipping point, and how do we know when we’re getting close to one? On Episode 3 of Not Cool, Ariel talks to Dr. Tim Lenton, Professor and Chair in Earth System Science and Climate Change at the University of Exeter and Director of the Global Systems Institute. Tim explains the shifting system dynamics that underlie phenomena like glacial retreat and the disruption of monsoons, as well as their consequences. He also discusses how to deal with low-certainty, high-stakes risks, what types of policies we most need to be implementing, and how humanity’s unique self-awareness impacts our relationship with the Earth.
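For intuition about what "tipping" means dynamically, here is a generic fold-bifurcation toy model — not taken from the episode, with purely illustrative equations and numbers: a system with two stable states loses one of them as forcing slowly increases, and the state then jumps abruptly.

```python
# Toy fold-bifurcation model of a tipping point: dx/dt = x - x**3 + p.
# For small forcing p there are two stable states; the lower one
# disappears at p = 2/(3*sqrt(3)) ≈ 0.385, and the system jumps.
dt, steps = 0.01, 200_000
x = -1.0                          # start in the lower ("pre-tip") stable state
for i in range(steps):
    p = 0.6 * i / steps           # forcing ramps slowly from 0 to 0.6
    x += (x - x**3 + p) * dt      # forward-Euler step of the dynamics
    if x > 0.5:                   # crossed the unstable middle branch
        print(f"tipped at forcing p = {p:.3f}")
        break
```

Early warning signals like "critical slowing down," which Tim works on, show up in models like this as the recovery rate toward the pre-tip state shrinking to zero as the forcing approaches the fold.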
Topics discussed include:
Climate tipping points: impacts, warning signals
Evidence that the climate is nearing a tipping point
IPCC warming targets
Risk management under uncertainty
Climate policies
Human tipping points: social, economic, technological
The Gaia Hypothesis

Sep 3, 2019 • 28min
Not Cool Ep 2: Joanna Haigh on climate modeling and the history of climate change
On the second episode of Not Cool, Ariel delves into some of the basic science behind climate change and the history of its study. She is joined by Dr. Joanna Haigh, an atmospheric physicist whose work has been foundational to our current understanding of how the climate works. Joanna is a Fellow of the Royal Society and recently retired as Co-Director of the Grantham Institute on Climate Change and the Environment at Imperial College London. Here, she gives a historical overview of the field of climate science and the major breakthroughs that moved it forward. She also discusses her own work on the stratosphere, radiative forcing, solar variability, and more.
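To make "radiative forcing" concrete, here is a worked example using the widely cited simplified expression for CO2 forcing from Myhre et al. (1998). This is a standard approximation, not necessarily the formulation Joanna discusses, and the concentration values below are round numbers chosen for illustration.

```python
import math

def co2_forcing(c_ppm, c_ref_ppm=278.0):
    """Approximate CO2 radiative forcing in W/m^2 relative to a baseline
    concentration, via dF = 5.35 * ln(C/C0) (Myhre et al. 1998)."""
    return 5.35 * math.log(c_ppm / c_ref_ppm)

print(f"~410 ppm (circa 2019): {co2_forcing(410):.2f} W/m^2")  # ≈ 2.08
print(f"doubled CO2 (556 ppm): {co2_forcing(556):.2f} W/m^2")  # ≈ 3.71
```

A sustained positive forcing of a few watts per square meter, applied over the whole planet, is what drives the warming discussed throughout the series.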
Topics discussed include:
History of the study of climate change
Overview of climate modeling
Radiative forcing
What’s changed in climate science in the past few decades
How to distinguish between natural climate variation and human-induced global warming
Solar variability, sun spots, and the effect of the sun on the climate
History of climate denial

Sep 3, 2019 • 36min
Not Cool Ep 1: John Cook on misinformation and overcoming climate silence
On the premier of Not Cool, Ariel is joined by John Cook: psychologist, climate change communication researcher, and founder of SkepticalScience.com. Much of John’s work focuses on misinformation related to climate change, how it’s propagated, and how to counter it. He offers a historical analysis of climate denial and the motivations behind it, and he debunks some of its most persistent myths. John also discusses his own research on perceived social consensus, the phenomenon he’s termed “climate silence,” and more.
Topics discussed include:
History of the study of climate change
Climate denial: history and motivations
Persistent climate myths
How to overcome misinformation
How to talk to climate deniers
Perceived social consensus and climate silence

Sep 3, 2019 • 4min
Not Cool Prologue: A Climate Conversation
In this short trailer, Ariel Conn talks about FLI's newest podcast series, Not Cool: A Climate Conversation.
Climate change, to state the obvious, is a huge and complicated problem. But unlike with the threats posed by artificial intelligence, biotechnology, or nuclear weapons, you don’t need an advanced science degree or a high-ranking government position to start having a meaningful impact on your own carbon footprint. Each of us can begin making lifestyle changes today that will help. We started this podcast because the news about climate change seems to get worse with each new article and report, while the solutions, at least as reported, remain vague and elusive. We wanted to hear from the scientists and experts themselves to learn what’s really going on and how we can all come together to solve this crisis.

Aug 30, 2019 • 49min
FLI Podcast: Beyond the Arms Race Narrative: AI and China with Helen Toner and Elsa Kania
Discussions of Chinese artificial intelligence often center around the trope of a U.S.-China arms race. On this month’s FLI podcast, we’re moving beyond this narrative and taking a closer look at the realities of AI in China and what they really mean for the United States. Experts Helen Toner and Elsa Kania, both of Georgetown University’s Center for Security and Emerging Technology, discuss China’s rise as a world AI power, the relationship between the Chinese tech industry and the military, and the use of AI in human rights abuses by the Chinese government. They also touch on Chinese-American technological collaboration, technological difficulties facing China, and what may determine international competitive advantage going forward.
Topics discussed in this episode include:
The rise of AI in China
The escalation of tensions between the U.S. and China in the AI realm
Chinese AI development plans and policy initiatives
The AI arms race narrative and the problems with it
Civil-military fusion in China vs. the U.S.
The regulation of Chinese-American technological collaboration
AI and authoritarianism
Openness in AI research and when it is (and isn’t) appropriate
The relationship between privacy and advancement in AI

Aug 16, 2019 • 1h 12min
AIAP: China's AI Superpower Dream with Jeffrey Ding
"In July 2017, The State Council of China released the New Generation Artificial Intelligence Development Plan. This policy outlines China’s strategy to build a domestic AI industry worth nearly US$150 billion in the next few years and to become the leading AI power by 2030. This officially marked the development of the AI sector as a national priority and it was included in President Xi Jinping’s grand vision for China." (FLI's AI Policy - China page) In the context of these developments and an increase in conversations regarding AI and China, Lucas spoke with Jeffrey Ding from the Center for the Governance of AI (GovAI). Jeffrey is the China lead for GovAI where he researches China's AI development and strategy, as well as China's approach to strategic technologies more generally.
Topics discussed in this episode include:
-China's historical relationships with technology development
-China's AI goals and some recently released principles
-Jeffrey Ding's work, Deciphering China's AI Dream
-The central drivers of AI and the resulting Chinese AI strategy
-Chinese AI capabilities
-AGI and superintelligence awareness and thinking in China
-Dispelling AI myths, promoting appropriate memes
-What healthy competition between the US and China might look like
Here you can find the page for this podcast: https://futureoflife.org/2019/08/16/chinas-ai-superpower-dream-with-jeffrey-ding/
Important timestamps:
0:00 Intro
2:14 Motivations for the conversation
5:44 Historical background on China and AI
8:13 AI principles in China and the US
16:20 Jeffrey Ding’s work, Deciphering China’s AI Dream
21:55 Does China’s government play a central hand in setting regulations?
23:25 Can Chinese implementation of regulations and standards move faster than in the US? Is China buying shares in companies to have decision making power?
27:05 The components and drivers of AI in China and how they affect Chinese AI strategy
35:30 Chinese government guidance funds for AI development
37:30 Analyzing China’s AI capabilities
44:20 Implications for the future of AI and AI strategy given the current state of the world
49:30 How important are AGI and superintelligence concerns in China?
52:30 Are there explicit technical AI research programs in China for AGI?
53:40 Dispelling AI myths and promoting appropriate memes
56:10 Relative and absolute gains in international politics
59:11 On Peter Thiel’s recent comments on superintelligence, AI, and China
1:04:10 Major updates and changes since Jeffrey wrote Deciphering China’s AI Dream
1:05:50 What does healthy competition between China and the US look like?
1:11:05 Where to follow Jeffrey and read more of his work
You can take a short (4-minute) survey to share your feedback about the podcast here: https://www.surveymonkey.com/r/YWHDFV7
Deciphering China's AI Dream: https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
FLI AI Policy - China page: https://futureoflife.org/ai-policy-china/
ChinAI Newsletter: https://chinai.substack.com
Jeff's Twitter: https://twitter.com/jjding99
Previous podcast with Jeffrey: https://youtu.be/tm2kmSQNUAU

Aug 1, 2019 • 1h 10min
FLI Podcast: The Climate Crisis as an Existential Threat with Simon Beard and Haydn Belfield
Does the climate crisis pose an existential threat? And is that even the best way to formulate the question, or should we be looking at the relationship between the climate crisis and existential threats differently? In this month’s FLI podcast, Ariel was joined by Simon Beard and Haydn Belfield of the University of Cambridge’s Centre for the Study of Existential Risk (CSER), who explained why, despite the many unknowns, it might indeed make sense to study climate change as an existential threat. Simon and Haydn broke down the different systems underlying human civilization and the ways climate change threatens these systems. They also discussed our species’ unique strengths and vulnerabilities with respect to the changing climate, and the ways in which technology has heightened both.


