

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Oct 15, 2020 • 1h 39min
Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism
Stephen Batchelor, a Secular Buddhist teacher and former monk, joins the FLI Podcast to discuss the project of awakening, the facets of human nature which contribute to extinction risk, and how we might better embrace existential threats.
Topics discussed in this episode include:
-The projects of awakening and growing the wisdom with which to manage technologies
-What might be possible by embarking on the project of waking up
-Facets of human nature that contribute to existential risk
-The dangers of the problem-solving mindset
-Improving the effective altruism and existential risk communities
You can find the page for this podcast here: https://futureoflife.org/2020/10/15/stephen-batchelor-on-awakening-embracing-existential-risk-and-secular-buddhism/
Timestamps:
0:00 Intro
3:40 Albert Einstein and the quest for awakening
8:45 Non-self, emptiness, and non-duality
25:48 Stephen's conception of awakening, and making the wise more powerful vs the powerful more wise
33:32 The importance of insight
49:45 The present moment, creativity, and suffering/pain/dukkha
58:44 Stephen's article, Embracing Extinction
1:04:48 The dangers of the problem-solving mindset
1:26:12 Improving the effective altruism and existential risk communities
1:37:30 Where to find and follow Stephen
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Sep 30, 2020 • 1h 46min
Kelly Wanser on Climate Change as a Possible Existential Threat
Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change.
Topics discussed in this episode include:
- The risks of climate change in the short term
- Tipping points and tipping cascades
- Climate intervention via marine cloud brightening and releasing particles in the stratosphere
- The benefits and risks of climate intervention techniques
- The international politics of climate change and weather modification
You can find the page for this podcast here: https://futureoflife.org/2020/09/30/kelly-wanser-on-marine-cloud-brightening-for-mitigating-climate-change/
Video recording of this podcast here: https://youtu.be/CEUEFUkSMHU
Timestamps:
0:00 Intro
2:30 What is SilverLining’s mission?
4:27 Why is climate change thought to be very risky in the next 10-30 years?
8:40 Tipping points and tipping cascades
13:25 Is climate change an existential risk?
17:39 Earth systems that help to stabilize the climate
21:23 Days where it will be unsafe to work outside
25:03 Marine cloud brightening, stratospheric sunlight reflection, and other climate interventions SilverLining is interested in
41:46 What experiments are happening to understand tropospheric and stratospheric climate interventions?
50:20 International politics of weather modification
53:52 How do efforts to reduce greenhouse gas emissions fit into the project of reflecting sunlight?
57:35 How would you respond to someone who views climate intervention by marine cloud brightening as too dangerous?
59:33 What are the main arguments of those skeptical of climate intervention approaches?
01:13:21 The international problem of coordinating on climate change
01:24:50 Is climate change a global catastrophic or existential risk, and how does it relate to other large risks?
01:33:20 Should effective altruists spend more time on the issue of climate change and climate intervention?
01:37:48 What can listeners do to help with this issue?
01:40:00 Climate change and Mars colonization
01:44:55 Where to find and follow Kelly
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Sep 16, 2020 • 1h 51min
Andrew Critch on AI Research Considerations for Human Existential Safety
In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety, to the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as the most likely source of existential risk: the possibility of externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety issues are not naturally covered by industry incentives.
Topics discussed in this episode include:
- The mainstream computer science view of AI existential risk
- Distinguishing AI safety from AI existential safety
- The need for more precise terminology in the field of AI existential safety and alignment
- The concept of prepotent AI systems and the problem of delegation
- Which alignment problems get solved by commercial incentives and which don’t
- The threat of diffusion of responsibility for AI existential safety considerations not covered by commercial incentives
- Prepotent AI risk types that lead to unsurvivability for humanity
You can find the page for this podcast here: https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/
Timestamps:
0:00 Intro
2:53 Why Andrew wrote ARCHES and what it’s about
6:46 The perspective of the mainstream CS community on AI existential risk
13:03 ARCHES in relation to AI existential risk literature
16:05 The distinction between safety and existential safety
24:27 Existential risk is most likely to obtain through externalities
29:03 The relationship between existential safety and safety for current systems
33:17 Research areas that may not be solved by natural commercial incentives
51:40 What’s an AI system and an AI technology?
53:42 Prepotent AI
59:41 Misaligned prepotent AI technology
01:05:13 Human frailty
01:07:37 The importance of delegation
01:14:11 Single-single, single-multi, multi-single, and multi-multi
01:15:26 Control, instruction, and comprehension
01:20:40 The multiplicity thesis
01:22:16 Risk types from prepotent AI that lead to human unsurvivability
01:34:06 Flow-through effects
01:41:00 Multi-stakeholder objectives
01:49:08 Final words from Andrew
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Sep 3, 2020 • 1h 55min
Iason Gabriel on Foundational Philosophical Questions in AI Alignment
In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely enter the picture explicitly. In the realm of AI alignment, however, the normative and the technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI carries its own normative and metaethical commitments, which will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and the technical in AI alignment and to discuss his recent paper Artificial Intelligence, Values and Alignment.
Topics discussed in this episode include:
-How moral philosophy and political theory are deeply related to AI alignment
-The problem of dealing with a plurality of preferences and philosophical views in AI alignment
-How the is-ought problem and metaethics fit into alignment
-What we should be aligning AI systems to
-The importance of democratic solutions to questions of AI alignment
-The long reflection
You can find the page for this podcast here: https://futureoflife.org/2020/09/03/iason-gabriel-on-foundational-philosophical-questions-in-ai-alignment/
Timestamps:
0:00 Intro
2:10 Why Iason wrote Artificial Intelligence, Values and Alignment
3:12 What AI alignment is
6:07 The technical and normative aspects of AI alignment
9:11 The normative being dependent on the technical
14:30 Coming up with an appropriate alignment procedure given the is-ought problem
31:15 What systems are subject to an alignment procedure?
39:55 What is it that we're trying to align AI systems to?
01:02:30 Single-agent and multi-agent alignment scenarios
01:27:00 What is the procedure for choosing which evaluative model(s) will be used to judge different alignment proposals?
01:30:28 The long reflection
01:53:55 Where to follow and contact Iason
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Aug 18, 2020 • 1h 42min
Peter Railton on Moral Learning and Metaethics in AI Systems
From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to successfully navigate complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning allows us to solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasingly autonomous and active in social situations involving human and non-human agents, AI moral competency via the capacity for moral learning will become more and more critical. On this episode of the AI Alignment Podcast, Peter Railton joins us to discuss the potential role of moral learning and moral epistemology in AI systems, as well as his views on metaethics.
Topics discussed in this episode include:
-Moral epistemology
-The potential relevance of metaethics to AI alignment
-The importance of moral learning in AI systems
-The metaethical views of Peter Railton, Derek Parfit, and Peter Singer
You can find the page for this podcast here: https://futureoflife.org/2020/08/18/peter-railton-on-moral-learning-and-metaethics-in-ai-systems/
Timestamps:
0:00 Intro
3:05 Does metaethics matter for AI alignment?
22:49 Long-reflection considerations
26:05 Moral learning in humans
35:07 The need for moral learning in artificial intelligence
53:57 Peter Railton's views on metaethics and his discussions with Derek Parfit
1:38:50 The need for engagement between philosophers and the AI alignment community
1:40:37 Where to find Peter's work
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Jul 1, 2020 • 1h 37min
Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI
It's well-established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out and the AI, likely being a powerful optimizer, will take advantage of the dimensions of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals in the objective function, but can have catastrophic consequences for human preferences and values the system fails to consider. Is it possible for misalignment to also occur between the model being trained and the objective function used for training? The answer looks like yes. Evan Hubinger from the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, as well as to evaluate three proposals for building safe advanced AI.
Topics discussed in this episode include:
-Inner and outer alignment
-How and why inner alignment can fail
-Training competitiveness and performance competitiveness
-Evaluating imitative amplification, AI safety via debate, and microscope AI
You can find the page for this podcast here: https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/
Timestamps:
0:00 Intro
2:07 How Evan got into AI alignment research
4:42 What is AI alignment?
7:30 How Evan approaches AI alignment
13:05 What are inner alignment and outer alignment?
24:23 Gradient descent
36:30 Testing for inner alignment
38:38 Wrapping up on outer alignment
44:24 Why is inner alignment a priority?
45:30 How inner alignment fails
01:11:12 Training competitiveness and performance competitiveness
01:16:17 Evaluating proposals for building safe and advanced AI via inner and outer alignment, as well as training and performance competitiveness
01:17:30 Imitative amplification
01:23:00 AI safety via debate
01:26:32 Microscope AI
01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment
01:34:45 Where to follow Evan and find more of his work
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Jun 26, 2020 • 44min
Barker - Hedonic Recalibration (Mix)
This is a mix by Barker, a Berlin-based music producer, that was featured on our last podcast episode: Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix). We hope you'll find inspiration and well-being in this soundscape.
You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/
Tracklist:
Delta Rain Dance - 1
John Beltran - A Different Dream
Rrose - Horizon
Alexandroid - lvpt3
Datassette - Drizzle Fort
Conrad Sprenger - Opening
JakoJako - Wavetable#1
Barker & David Goldberg - #3
Barker & Baumecker - Organik (Intro)
Anthony Linell - Fractal Vision
Ametsub - Skydroppin’
Ladyfish\Mewark - Comfortable
JakoJako & Barker - [unreleased]
Where to follow Sam Barker:
Soundcloud: @voltek
Twitter: twitter.com/samvoltek
Instagram: www.instagram.com/samvoltek/
Website: www.voltek-labs.net/
Bandcamp: sambarker.bandcamp.com/
Where to follow Sam's label, Ostgut Ton:
Soundcloud: @ostgutton-official
Facebook: www.facebook.com/Ostgut.Ton.OFFICIAL/
Twitter: twitter.com/ostgutton
Instagram: www.instagram.com/ostgut_ton/
Bandcamp: ostgut.bandcamp.com/
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Jun 24, 2020 • 1h 42min
Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)
Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam is the creator of euphoric soundscapes inspired by the writings of David Pearce, exemplified most clearly in his latest album, aptly named "Utility." Sam's artistry, motivated by blissful visions of the future, and David's philosophical and technological writings on the potential for the biological domestication of heaven make for a natural fusion of artistic, moral, and intellectual excellence. This podcast explores the significance Sam found in David's work, how it informed his music production, and Sam and David's optimistic visions of the future; it also features a guest mix by Sam and plenty of musical content.
Topics discussed in this episode include:
-The relationship between Sam's music and David's writing
-Existential hope
-Ideas from the Hedonistic Imperative
-Sam's albums
-The future of art and music
You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/
You can find the mix with no interview portion of the podcast here: https://soundcloud.com/futureoflife/barker-hedonic-recalibration-mix
Where to follow Sam Barker:
Soundcloud: https://soundcloud.com/voltek
Twitter: https://twitter.com/samvoltek
Instagram: https://www.instagram.com/samvoltek/
Website: https://www.voltek-labs.net/
Bandcamp: https://sambarker.bandcamp.com/
Where to follow Sam's label, Ostgut Ton:
Soundcloud: https://soundcloud.com/ostgutton-official
Facebook: https://www.facebook.com/Ostgut.Ton.OFFICIAL/
Twitter: https://twitter.com/ostgutton
Instagram: https://www.instagram.com/ostgut_ton/
Bandcamp: https://ostgut.bandcamp.com/
Timestamps:
0:00 Intro
5:40 The inspiration behind Sam's music
17:38 Barker - Maximum Utility
20:03 David and Sam on their work
23:45 Do any of the tracks evoke specific visions or hopes?
24:40 Barker - Die-Hards Of The Darwinian Order
28:15 Barker - Paradise Engineering
31:20 Barker - Hedonic Treadmill
33:05 The future and evolution of art
54:03 David on how good the future can be
58:36 Guest mix by Barker
Tracklist:
Delta Rain Dance – 1
John Beltran – A Different Dream
Rrose – Horizon
Alexandroid – lvpt3
Datassette – Drizzle Fort
Conrad Sprenger – Opening
JakoJako – Wavetable#1
Barker & David Goldberg – #3
Barker & Baumecker – Organik (Intro)
Anthony Linell – Fractal Vision
Ametsub – Skydroppin’
Ladyfish\Mewark – Comfortable
JakoJako & Barker – [unreleased]
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Jun 15, 2020 • 1h 53min
Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI
Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, bestselling author and Professor of Psychology at Harvard, and Stuart Russell, Professor of Computer Science at UC Berkeley, join us on this episode of the AI Alignment Podcast to discuss these questions and more.
Topics discussed in this episode include:
-The historical and intellectual foundations of AI
-How AI systems do and do not achieve intelligence in the way the human mind does
-The rise of AI and what it signifies
-The benefits and risks of AI in both the short and long term
-Whether superintelligent AI will pose an existential risk to humanity
You can find the page for this podcast here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/
You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3
You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/
Timestamps:
0:00 Intro
4:30 The historical and intellectual foundations of AI
11:11 Moving beyond dualism
13:16 Regarding the objectives of an agent as fixed
17:20 The distinction between artificial intelligence and deep learning
22:00 How AI systems do and do not achieve intelligence in the way the human mind does
49:46 What changes to human society does the rise of AI signal?
54:57 What are the benefits and risks of AI?
01:09:38 Do superintelligent AI systems pose an existential threat to humanity?
01:51:30 Where to find and follow Steve and Stuart
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Jun 1, 2020 • 1h 33min
Sam Harris on Global Priorities, Existential Risk, and What Matters Most
Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we're serious both as individuals and as a species about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them.
Topics discussed in this episode include:
-The problem of communication
-Global priorities
-Existential risk
-The suffering of both wild animals and factory-farmed animals
-Global poverty
-Artificial general intelligence risk and AI alignment
-Ethics
-Sam’s book, The Moral Landscape
You can find the page for this podcast here: https://futureoflife.org/2020/06/01/on-global-priorities-existential-risk-and-what-matters-most-with-sam-harris/
You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3
You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/
Timestamps:
0:00 Intro
3:52 What are the most important problems in the world?
13:14 Global priorities: existential risk
20:15 Why global catastrophic risks are more likely than existential risks
25:09 Longtermist philosophy
31:36 Making existential and global catastrophic risk more emotionally salient
34:41 How analyzing the self makes longtermism more attractive
40:28 Global priorities & effective altruism: animal suffering and global poverty
56:03 Is machine suffering the next global moral catastrophe?
59:36 AI alignment and artificial general intelligence/superintelligence risk
01:11:25 Expanding our moral circle of compassion
01:13:00 The Moral Landscape, consciousness, and moral realism
01:30:14 Can bliss and wellbeing be mathematically defined?
01:31:03 Where to follow Sam and concluding thoughts
Photo by Christopher Michel: https://www.flickr.com/photos/cmichel67/
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.


