

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Apr 1, 2021 • 1h 38min
Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures
Joscha Bach, Cognitive Scientist and AI researcher, as well as Anthony Aguirre, UCSC Professor of Physics, join us to explore the world through the lens of computation and the difficulties we face on the way to beneficial futures.
Topics discussed in this episode include:
-Understanding the universe through digital physics
-How human consciousness operates and is structured
-The path to aligned AGI and bottlenecks to beneficial futures
-Incentive structures and collective coordination
You can find the page for this podcast here: https://futureoflife.org/2021/03/31/joscha-bach-and-anthony-aguirre-on-digital-physics-and-moving-towards-beneficial-futures/
You can find FLI's three new policy-focused job postings here: futureoflife.org/job-postings/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
3:17 What is truth and knowledge?
11:39 What is subjectivity and objectivity?
14:32 What is the universe ultimately?
19:22 Is the universe a cellular automaton? Is the universe ultimately digital or analogue?
24:05 Hilbert's hotel from the point of view of computation
35:18 Seeing the world as a fractal
38:48 Describing human consciousness
51:10 Meaning, purpose, and harvesting negentropy
55:08 The path to aligned AGI
57:37 Bottlenecks to beneficial futures and existential security
1:06:53 A future with one, several, or many AGI systems? How do we maintain appropriate incentive structures?
1:19:39 Non-duality and collective coordination
1:22:53 What difficulties are there for an idealist worldview that involves computation?
1:27:20 Which features of mind and consciousness are necessarily coupled and which aren't?
1:36:40 Joscha's final thoughts on AGI
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Mar 20, 2021 • 1h 12min
Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI
Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.
Topics discussed in this episode include:
-Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI
-The relationship between AI safety, control, and alignment
-Virtual worlds as a proposal for solving multi-multi alignment
-AI security
You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/
You can find FLI's three new policy-focused job postings here: https://futureoflife.org/job-postings/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:35 Roman’s primary research interests
4:09 How theoretical proofs help AI safety research
6:23 How impossibility results constrain computer science systems
10:18 The inability to tell if arbitrary code is friendly or unfriendly
12:06 Impossibility results clarify what we can do
14:19 Roman’s results on unexplainability and incomprehensibility
22:34 Focusing on comprehensibility
26:17 Roman’s results on uncontrollability
28:33 Alignment as a subset of safety and control
30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment
33:40 What does it mean to solve AI safety?
34:19 What do the impossibility results really mean?
37:07 Virtual worlds and AI alignment
49:55 AI security and malevolent agents
53:00 Air gapping, boxing, and other security methods
58:43 Some examples of historical failures of AI systems and what we can learn from them
1:01:20 Clarifying impossibility results
1:06:55 Examples of systems failing and what these demonstrate about AI
1:08:20 Are oracles a valid approach to AI safety?
1:10:30 Roman’s final thoughts
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Feb 25, 2021 • 1h 40min
Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Autonomous Weapons
Stuart Russell, Professor of Computer Science at UC Berkeley, and Zachary Kallenborn, WMD and drone swarm expert, join us to discuss the highest-risk and most destabilizing aspects of lethal autonomous weapons.
Topics discussed in this episode include:
-The current state of the deployment and development of lethal autonomous weapons and swarm technologies
-Drone swarms as a potential weapon of mass destruction
-The risks of escalation, unpredictability, and proliferation with regard to autonomous weapons
-The difficulty of attribution, verification, and accountability with autonomous weapons
-Autonomous weapons governance as norm setting for global AI issues
You can find the page for this podcast here: https://futureoflife.org/2021/02/25/stuart-russell-and-zachary-kallenborn-on-drone-swarms-and-the-riskiest-aspects-of-lethal-autonomous-weapons/
You can check out the new lethal autonomous weapons website here: https://autonomousweapons.org/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:23 Emilia Javorsky on lethal autonomous weapons
7:27 What is a lethal autonomous weapon?
11:33 Autonomous weapons that exist today
16:57 The concerns of collateral damage, accidental escalation, scalability, control, and error risk
26:57 The proliferation risk of autonomous weapons
32:30 To what extent are global superpowers pursuing these weapons? What is the state of industry's pursuit of the research and manufacturing of this technology?
42:13 A possible proposal for a selective ban on small anti-personnel autonomous weapons
47:20 Lethal autonomous weapons as a potential weapon of mass destruction
53:49 The unpredictability of autonomous weapons, especially when swarms are interacting with other swarms
58:09 The risk of autonomous weapons escalating conflicts
01:10:50 The risk of drone swarms proliferating
01:20:16 The risk of assassination
01:23:25 The difficulty of attribution and accountability
01:26:05 The governance of autonomous weapons being relevant to the global governance of AI
01:30:11 The importance of verification for responsibility, accountability, and regulation
01:35:50 Concerns about the beginning of an arms race and the need for regulation
01:38:46 Wrapping up
01:39:23 Outro
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Feb 9, 2021 • 1h 46min
John Prendergast on Non-dual Awareness and Wisdom for the 21st Century
John Prendergast, former adjunct professor of psychology at the California Institute of Integral Studies, joins Lucas Perry for a discussion about the experience and effects of ego-identification, how to shift to new levels of identity, the nature of non-dual awareness, and the potential relationship between waking up and collective human problems. This is not an FLI Podcast, but a special release where Lucas shares a direction he feels has an important relationship with AI alignment and existential risk issues.
Topics discussed in this episode include:
-The experience of egocentricity and ego-identification
-Waking up into heart awareness
-The movement towards and qualities of non-dual consciousness
-The ways in which the condition of our minds collectively affects the world
-How waking up may be relevant to the creation of AGI
You can find the page for this podcast here: https://futureoflife.org/2021/02/09/john-prendergast-on-non-dual-awareness-and-wisdom-for-the-21st-century/
Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
7:10 The modern human condition
9:29 What egocentricity and ego-identification are
15:38 Moving beyond the experience of self
17:38 The origins and structure of self
20:25 A pointing out instruction for noticing ego-identification and waking up out of it
24:34 A pointing out instruction for abiding in heart-mind or heart awareness
28:53 The qualities of and moving into heart awareness and pure awareness
33:48 An explanation of non-dual awareness
40:50 Exploring the relationship between awareness, belief, and action
46:25 Growing up and improving the egoic structure
48:29 Waking up as recognizing true nature
51:04 Exploring awareness as primitive and primary
53:56 John's dream of Sri Nisargadatta Maharaj
57:57 The use and value of conceptual thought and the mind
1:00:57 The epistemics of heart-mind and the conceptual mind as we shift levels of identity
1:17:46 A pointing out instruction for inquiring into core beliefs
1:27:28 The universal heart, qualities of awakening, and the ethical implications of such shifts
1:31:38 Wisdom, waking up, and growing up for the transgenerational issues of the 21st century
1:38:44 Waking up and its applicability to the creation of AGI
1:43:25 Where to find, follow, and reach out to John
1:45:56 Outro
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Jan 22, 2021 • 1h 18min
Beatrice Fihn on the Total Elimination of Nuclear Weapons
Beatrice Fihn, executive director of the International Campaign to Abolish Nuclear Weapons (ICAN) and Nobel Peace Prize recipient, joins us to discuss the current risks of nuclear war, policies that can reduce the risks of nuclear conflict, and how to move towards a nuclear weapons free world.
Topics discussed in this episode include:
-The current nuclear weapons geopolitical situation
-The risks and mechanics of accidental and intentional nuclear war
-Policy proposals for reducing the risks of nuclear war
-Deterrence theory
-The Treaty on the Prohibition of Nuclear Weapons
-Working towards the total elimination of nuclear weapons
You can find the page for this podcast here: https://futureoflife.org/2021/01/21/beatrice-fihn-on-the-total-elimination-of-nuclear-weapons/
Timestamps:
0:00 Intro
4:28 Overview of the current nuclear weapons situation
6:47 The 9 nuclear weapons states, and accidental and intentional nuclear war
9:27 Accidental nuclear war and human systems
12:08 The risks of nuclear war in 2021 and nuclear stability
17:49 Toxic personalities and the human component of nuclear weapons
23:23 Policy proposals for reducing the risk of nuclear war
23:55 New START Treaty
25:42 What does it mean to maintain credible deterrence?
26:45 ICAN and working on the Treaty on the Prohibition of Nuclear Weapons
28:00 Deterrence theoretic arguments for nuclear weapons
32:36 The reduction of nuclear weapons, no first use, removing ground-based missile systems, removing hair-trigger alert, and removing presidential authority to use nuclear weapons
39:13 Arguments for and against nuclear risk reduction policy proposals
46:02 Moving all of the United States' nuclear weapons to bombers and nuclear submarines
48:27 Working towards the total elimination of nuclear weapons and the theory behind it
1:11:40 The value of the Treaty on the Prohibition of Nuclear Weapons
1:14:26 Elevating activism around nuclear weapons and messaging more skillfully
1:15:40 What the public needs to understand about nuclear weapons
1:16:35 World leaders' views of the treaty
1:17:15 How to get involved
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Jan 8, 2021 • 1h 1min
Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year
Max Tegmark and members of the FLI core team come together to discuss favorite projects from 2020, what we've learned from the past year, and what we think is needed for existential risk reduction in 2021.
Topics discussed in this episode include:
-FLI's perspectives on 2020 and hopes for 2021
-What our favorite projects from 2020 were
-The biggest lessons we've learned from 2020
-What we see as crucial and needed in 2021 to ensure and make improvements towards existential safety
You can find the page for this podcast here: https://futureoflife.org/2021/01/08/max-tegmark-and-the-fli-team-on-2020-and-existential-risk-reduction-in-the-new-year/
Timestamps:
0:00 Intro
00:52 First question: What was your favorite project from 2020?
1:03 Max Tegmark on the Future of Life Award
4:15 Anthony Aguirre on AI Loyalty
9:18 David Nicholson on the Future of Life Award
12:23 Emilia Javorsky on being a co-champion for the UN Secretary-General's effort on digital cooperation
14:03 Jared Brown on developing comments on the European Union's White Paper on AI through community collaboration
16:40 Tucker Davey on editing the biography of Victor Zhdanov
19:49 Lucas Perry on the podcast and Pindex video
23:17 Second question: What lessons do you take away from 2020?
23:26 Max Tegmark on human fragility and vulnerability
25:14 Max Tegmark on learning from history
26:47 Max Tegmark on the growing threats of AI
29:45 Anthony Aguirre on the inability of present-day institutions to deal with large unexpected problems
33:00 David Nicholson on the need for self-reflection on the use and development of technology
38:05 Emilia Javorsky on the global community coming to awareness about tail risks
39:48 Jared Brown on our vulnerability to low probability, high impact events and the importance of adaptability and policy engagement
41:43 Tucker Davey on taking existential risks more seriously and ethics-washing
43:57 Lucas Perry on the fragility of human systems
45:40 Third question: What is needed in 2021 to make progress on existential risk mitigation?
45:50 Max Tegmark on holding Big Tech accountable, repairing geopolitics, and fighting the myth of the technological zero-sum game
49:58 Anthony Aguirre on the importance of spreading understanding of expected value reasoning and fixing the information crisis
53:41 David Nicholson on the need to reflect on our values and relationship with technology
54:35 Emilia Javorsky on the importance of returning to multilateralism and global dialogue
56:00 Jared Brown on the need for robust government engagement
57:30 Lucas Perry on the need for creating institutions for existential risk mitigation and global cooperation
1:00:10 Outro
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Dec 11, 2020 • 1h 54min
Future of Life Award 2020: Saving 200,000,000 Lives by Eradicating Smallpox
The recipients of the 2020 Future of Life Award, William Foege, Michael Burkinsky, and Victor Zhdanov Jr., join us on this episode of the FLI Podcast to recount the story of smallpox eradication, William Foege's and Victor Zhdanov Sr.'s involvement in the eradication, and their personal experience of the events.
Topics discussed in this episode include:
-William Foege's and Victor Zhdanov's efforts to eradicate smallpox
-Personal stories from Foege's and Zhdanov's lives
-The history of smallpox
-Biological issues of the 21st century
You can find the page for this podcast here: https://futureoflife.org/2020/12/11/future-of-life-award-2020-saving-200000000-lives-by-eradicating-smallpox/
You can watch the 2020 Future of Life Award ceremony here: https://www.youtube.com/watch?v=73WQvR5iIgk&feature=emb_title&ab_channel=FutureofLifeInstitute
You can learn more about the Future of Life Award here: https://futureoflife.org/future-of-life-award/
Timestamps:
0:00 Intro
3:13 Part 1: How William Foege got into smallpox efforts and his work in Eastern Nigeria
14:12 The USSR's smallpox eradication efforts and convincing the WHO to take up global smallpox eradication
15:46 William Foege's efforts in and with the WHO for smallpox eradication
18:00 Surveillance and containment as a viable strategy
18:51 Implementing surveillance and containment throughout the world after success in West Africa
23:55 Wrapping up with eradication and dealing with the remnants of smallpox
25:35 Lab escape of smallpox in Birmingham, England, and the final natural case
27:20 Part 2: Introducing Michael Burkinsky as well as Victor and Katia Zhdanov
29:45 Introducing Victor Zhdanov Sr. and Alissa Zhdanov
31:05 Michael Burkinsky's memories of Victor Zhdanov Sr.
39:26 Victor Zhdanov Jr.'s memories of Victor Zhdanov Sr.
46:15 Mushrooms with meat
47:56 Stealing the family car
49:27 Victor Zhdanov Sr.'s efforts at the WHO for smallpox eradication
58:27 Exploring Alissa's book on Victor Zhdanov Sr.'s life
1:06:09 Michael's view that Victor Zhdanov Sr. is unsung, especially in Russia
1:07:18 Part 3: William Foege on the history of smallpox and biology in the 21st century
1:07:32 The origin and history of smallpox
1:10:34 The origin and history of variolation and the vaccine
1:20:15 West African "healers" who would create smallpox outbreaks
1:22:25 The safety of the smallpox vaccine vs. modern vaccines
1:29:40 A favorite story of William Foege's
1:35:50 Larry Brilliant and people central to the eradication efforts
1:37:33 Foege's perspective on modern pandemics and human bias
1:47:56 What should we do after COVID-19 ends?
1:49:30 Bio-terrorism, existential risk, and synthetic pandemics
1:53:20 Foege's final thoughts on the importance of global health experts in politics
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Dec 2, 2020 • 1h 31min
Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress
Sean Carroll, theoretical physicist at Caltech, joins us on this episode of the FLI Podcast to comb through the history of human thought, the strengths and weaknesses of various intellectual movements, and how we are to situate ourselves in the 21st century given progress thus far.
Topics discussed in this episode include:
-Important intellectual movements and their merits
-The evolution of metaphysical and epistemological views over human history
-Consciousness, free will, and philosophical blunders
-Lessons for the 21st century
You can find the page for this podcast here: https://futureoflife.org/2020/12/01/sean-carroll-on-consciousness-physicalism-and-the-history-of-intellectual-progress/
You can find the video for this podcast here: https://youtu.be/6HNjL8_fsTk
Timestamps:
0:00 Intro
2:06 The problem of beliefs and the strengths and weaknesses of religion
6:40 The Age of Enlightenment and importance of reason
10:13 The importance of humility and the is-ought gap
17:53 The advantages of religion and mysticism
19:50 Materialism and Newtonianism
28:00 Duality, self, suffering, and philosophical blunders
36:56 Quantum physics as a paradigm shift
39:24 Physicalism, the problem of consciousness, and free will
01:01:50 What does it mean for something to be real?
01:09:40 The hard problem of consciousness
01:14:20 The many-worlds interpretation of quantum mechanics and utilitarianism
01:21:16 The importance of being charitable in conversation
1:24:55 Sean's position in the philosophy of consciousness
01:27:29 Sean's metaethical position
01:29:36 Where to find and follow Sean
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Nov 17, 2020 • 1h 22min
Mohamed Abdalla on Big Tech, Ethics-washing, and the Threat on Academic Integrity
Mohamed Abdalla, PhD student at the University of Toronto, joins us to discuss how Big Tobacco and Big Tech work to manipulate public opinion and academic institutions in order to maximize profits and avoid regulation.
Topics discussed in this episode include:
-How Big Tobacco uses its wealth to obfuscate the harms of tobacco and appear socially responsible
-The tactics shared by Big Tech and Big Tobacco to perform ethics-washing and avoid regulation
-How Big Tech and Big Tobacco work to influence universities, scientists, researchers, and policy makers
-How to combat the problem of ethics-washing in Big Tech
You can find the page for this podcast here: https://futureoflife.org/2020/11/17/mohamed-abdalla-on-big-tech-ethics-washing-and-the-threat-on-academic-integrity/
The Future of Life Institute AI policy page: https://futureoflife.org/AI-policy/
Timestamps:
0:00 Intro
1:55 How Big Tech actively distorts the academic landscape and what counts as Big Tech
6:00 How Big Tobacco has shaped industry research
12:17 The four tactics of Big Tobacco and Big Tech
13:34 Big Tech and Big Tobacco working to appear socially responsible
22:15 Big Tech and Big Tobacco working to influence the decisions made by funded universities
32:25 Big Tech and Big Tobacco working to influence research questions and the plans of individual scientists
51:53 Big Tech and Big Tobacco funding their own skeptics and critics to give the impression of social responsibility
1:00:24 Big Tech and being authentically socially responsible
1:11:41 Transformative AI, social responsibility, and the race to powerful AI systems
1:16:56 Ethics-washing as systemic
1:17:30 Action items for addressing ethics-washing
1:19:42 Has Mohamed received criticism for this paper?
1:20:07 Final thoughts from Mohamed
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Nov 2, 2020 • 1h 13min
Maria Arpa on the Power of Nonviolent Communication
Maria Arpa, Executive Director of the Center for Nonviolent Communication, joins the FLI Podcast to share the ins and outs of the powerful needs-based framework of nonviolent communication.
Topics discussed in this episode include:
-What nonviolent communication (NVC) consists of
-How NVC is different from normal discourse
-How NVC is composed of observations, feelings, needs, and requests
-NVC for systemic change
-Foundational assumptions in NVC
-An NVC exercise
You can find the page for this podcast here: https://futureoflife.org/2020/11/02/maria-arpa-on-the-power-of-nonviolent-communication/
Timestamps:
0:00 Intro
2:50 What is nonviolent communication?
4:05 How is NVC different from normal discourse?
18:40 NVC’s four components: observations, feelings, needs, and requests
34:50 NVC for systemic change
54:20 The foundational assumptions of NVC
58:00 An exercise in NVC
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.


