

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Jul 22, 2019 • 1h 14min
AIAP: On the Governance of AI with Jade Leung
In this podcast, Lucas spoke with Jade Leung from the Center for the Governance of AI (GovAI). GovAI strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. The center focuses on the political challenges arising from transformative AI, and they seek to guide the development of such technology for the common good by researching issues in AI governance and advising decision makers. Jade is Head of Research and Partnerships at GovAI, where her research focuses on modeling the politics of strategic general purpose technologies, with the intention of understanding which dynamics seed cooperation and conflict.
Topics discussed in this episode include:
-The landscape of AI governance
-The Center for the Governance of AI’s research agenda and priorities
-Aligning government and companies with ideal governance and the common good
-Norms and efforts in the AI alignment community in this space
-Technical AI alignment vs. AI Governance vs. malicious use cases
-Lethal autonomous weapons
-Where we are in terms of our efforts and what further work is needed in this space
You can take a short (3 minute) survey to share your feedback about the podcast here: www.surveymonkey.com/r/YWHDFV7

Jun 28, 2019 • 38min
FLI Podcast: Is Nuclear Weapons Testing Back on the Horizon? With Jeffrey Lewis and Alex Bell
Nuclear weapons testing is mostly a thing of the past: the last nuclear weapon test explosion on U.S. soil was conducted over 25 years ago. But how much longer can nuclear weapons testing remain a taboo that almost no country will violate?
In an official statement from the end of May, the Director of the U.S. Defense Intelligence Agency (DIA) expressed the belief that both Russia and China were preparing for explosive tests of low-yield nuclear weapons, if not already testing. Such accusations could potentially be used by the U.S. to justify a breach of the Comprehensive Nuclear-Test-Ban Treaty (CTBT).
This month, Ariel was joined by Jeffrey Lewis, Director of the East Asia Nonproliferation Program at the Center for Nonproliferation Studies and founder of armscontrolwonk.com, and Alex Bell, Senior Policy Director at the Center for Arms Control and Non-Proliferation. Lewis and Bell discuss the DIA’s allegations, the history of the CTBT, why it’s in the U.S. interest to ratify the treaty, and more.
Topics discussed in this episode:
- The validity of the U.S. allegations -- Is Russia really testing weapons?
- The International Monitoring System -- How effective is it if the treaty isn’t in effect?
- The modernization of U.S./Russian/Chinese nuclear arsenals and what that means.
- Why there’s a push for nuclear testing.
- Why opposing nuclear testing can help ensure the U.S. maintains nuclear superiority.

May 31, 2019 • 39min
FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi
In this month’s podcast, Ariel spoke with Ashley Llorens, the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, and Francesca Rossi, the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab and an FLI board member, about developing AI that will make us safer, more productive, and more creative. Too often, Rossi points out, we build our visions of the future around our current technology. Here, Llorens and Rossi take the opposite approach: let's build our technology around our visions for the future.

May 23, 2019 • 1h 27min
AIAP: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson
Consciousness is a concept at the forefront of much scientific and philosophical thinking. At the same time, there is deep disagreement over what exactly consciousness is, and whether it can be fully captured by science or is best explained away by a reductionist understanding. Some believe consciousness to be the source of all value, while others take it to be a kind of delusion or confusion generated by algorithms in the brain. The Qualia Research Institute takes consciousness to be something substantial and real in the world, something it expects can be captured by the language and tools of science and mathematics. To understand this position, we have to unpack the philosophical motivations that inform this view, the intuition pumps that lend themselves to these motivations, and the scientific process of investigation born of these considerations. Whether you take consciousness to be real or illusory, these possibilities carry tremendous moral and empirical implications for life's purpose and role in the universe. Is existence without consciousness meaningful?
In this podcast, Lucas spoke with Mike Johnson and Andrés Gómez Emilsson of the Qualia Research Institute. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master's in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder. Mike is interested in neuroscience, philosophy of mind, and complexity theory.
Topics discussed in this episode include:
-Functionalism and qualia realism
-Views that are skeptical of consciousness
-What we mean by consciousness
-Consciousness and causality
-Marr's levels of analysis
-Core problem areas in thinking about consciousness
-The Symmetry Theory of Valence
-AI alignment and consciousness
You can take a very short survey about the podcast here: https://www.surveymonkey.com/r/YWHDFV7

Apr 30, 2019 • 51min
The Unexpected Side Effects of Climate Change with Fran Moore and Nick Obradovich
It’s not just about the natural world. The side effects of climate change remain relatively unknown, but we can expect a warming world to impact every facet of our lives. In fact, as recent research shows, global warming is already affecting our mental and physical well-being, and this impact will only increase. Climate change could decrease the efficacy of our public safety institutions. It could damage our economies. It could even impact the way that we vote, potentially altering our democracies themselves. Yet even as these effects begin to appear, we’re already growing numb to the changing climate patterns behind them, and we’re failing to act.
In honor of Earth Day, this month’s podcast focuses on these side effects and what we can do about them. Ariel spoke with Dr. Nick Obradovich, a research scientist at the MIT Media Lab, and Dr. Fran Moore, an assistant professor in the Department of Environmental Science and Policy at the University of California, Davis. They study the social and economic impacts of climate change, and they shared some of their most remarkable findings.
Topics discussed in this episode include:
- How getting used to climate change may make it harder for us to address the issue
- The social cost of carbon
- The effect of temperature on mood, exercise, and sleep
- The effect of temperature on public safety and democratic processes
- Why it’s hard to get people to act
- What we can all do to make a difference
- Why we should still be hopeful

Apr 25, 2019 • 1h 7min
AIAP: An Overview of Technical AI Alignment with Rohin Shah (Part 2)
The space of AI alignment research is highly dynamic, and it's often difficult to get a bird's eye view of the landscape. This podcast is the second of two parts attempting to partially remedy this by providing an overview of technical AI alignment efforts. In particular, this episode seeks to continue the discussion from Part 1 by going in more depth with regards to the specific approaches to AI alignment. In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.
Topics discussed in this episode include:
-Embedded agency
-The field of "getting AI systems to do what we want"
-Ambitious value learning
-Corrigibility, including iterated amplification, debate, and factored cognition
-AI boxing and impact measures
-Robustness through verification, adversarial ML, and adversarial examples
-Interpretability research
-Comprehensive AI Services
-Rohin's relative optimism about the state of AI alignment
You can take a short (3 minute) survey to share your feedback about the podcast here: https://www.surveymonkey.com/r/YWHDFV7

Apr 11, 2019 • 1h 17min
AIAP: An Overview of Technical AI Alignment with Rohin Shah (Part 1)
The space of AI alignment research is highly dynamic, and it's often difficult to get a bird's eye view of the landscape. This podcast is the first of two parts attempting to partially remedy this by providing an overview of the organizations participating in technical AI alignment research, their specific research directions, and how these approaches all come together to make up the state of technical AI alignment efforts. In this first part, Rohin moves sequentially through the technical research organizations in this space and carves up the field by its varying research philosophies. We also dive into the specifics of many different approaches to AI safety, explore where they disagree, discuss what properties varying approaches attempt to develop/preserve, and hear Rohin's take on these different approaches.
You can take a short (3 minute) survey to share your feedback about the podcast here: https://www.surveymonkey.com/r/YWHDFV7
In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.
Topics discussed in this episode include:
- The perspectives of CHAI, MIRI, OpenAI, DeepMind, FHI, and others
- Where and why they disagree on technical alignment
- The kinds of properties and features we are trying to ensure in our AI systems
- What Rohin is excited and optimistic about
- Rohin's recommended reading and advice for improving at AI alignment research

Apr 3, 2019 • 49min
Why Ban Lethal Autonomous Weapons
Why are we so concerned about lethal autonomous weapons? Ariel spoke to four experts –– one physician, one lawyer, and two human rights specialists –– all of whom offered their most powerful arguments on why the world needs to ensure that algorithms are never allowed to make the decision to take a life. The episode was even recorded at the United Nations Convention on Conventional Weapons, where a ban on lethal autonomous weapons was under discussion.
We've compiled their arguments, along with many of our own, and now, we want to turn the discussion over to you. We’ve set up a comments section on the FLI podcast page (www.futureoflife.org/whyban), and we want to know: Which argument(s) do you find most compelling? Why?

Mar 7, 2019 • 1h 10min
AIAP: AI Alignment through Debate with Geoffrey Irving
See full article here: https://futureoflife.org/2019/03/06/ai-alignment-through-debate-with-geoffrey-irving/
"To make AI systems broadly useful for challenging real-world tasks, we need them to learn complex human goals and preferences. One approach to specifying complex goals asks humans to judge during training which agent behaviors are safe and useful, but this approach can fail if the task is too complicated for a human to directly judge. To help address this concern, we propose training agents via self play on a zero sum debate game. Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information... In practice, whether debate works involves empirical questions about humans and the tasks we want AIs to perform, plus theoretical questions about the meaning of AI alignment. " AI safety via debate (https://arxiv.org/pdf/1805.00899.pdf)
Debate is something that we are all familiar with. Usually it involves two or more people giving arguments and counterarguments over some question in order to establish a conclusion. At OpenAI, debate is being explored as an AI alignment methodology for reward learning (learning what humans want) and as a part of their scalability efforts (how to train/evolve systems to solve questions of increasing complexity). Debate might sometimes seem like a fruitless process, but when optimized and framed as a two-player zero-sum perfect-information game, it has properties and synergies with machine learning that may make it a powerful truth-seeking process on the path to beneficial AGI.
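To make the setup concrete, here is a minimal Python sketch of a single debate game. The Agent and Judge callables, the statement limit, and the reward scheme are hypothetical stand-ins for exposition, not OpenAI's actual implementation; in training, the agents would then be pitted against each other via self play, each updated to maximize its chance of winning the judge's verdict.

# A minimal, illustrative sketch of the zero-sum debate game described above.
# The interfaces here are hypothetical stand-ins, not OpenAI's implementation.
from typing import Callable, List, Tuple

Agent = Callable[[str, List[str]], str]  # (question, transcript so far) -> next statement
Judge = Callable[[str, List[str]], int]  # (question, full transcript) -> 0 if agent A wins, 1 if agent B wins

def debate_game(question: str, agent_a: Agent, agent_b: Agent,
                judge: Judge, max_statements: int = 6) -> Tuple[Tuple[int, int], List[str]]:
    """Play one debate: the agents alternate short statements up to a limit,
    then the judge decides who gave the most true, useful information."""
    transcript: List[str] = []
    for turn in range(max_statements):
        speaker = agent_a if turn % 2 == 0 else agent_b
        transcript.append(speaker(question, transcript))
    winner = judge(question, transcript)
    # Zero-sum payoff: whatever reward one agent gains, the other loses.
    rewards = (1, -1) if winner == 0 else (-1, 1)
    return rewards, transcript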
On today's episode, we are joined by Geoffrey Irving. Geoffrey is a member of the AI safety team at OpenAI. He has a PhD in computer science from Stanford University, and has worked at Google Brain on neural network theorem proving, cofounded Eddy Systems to autocorrect code as you type, and has worked on computational physics and geometry at Otherlab, D. E. Shaw Research, Pixar, and Weta Digital. He has screen credits on Tintin, Wall-E, Up, and Ratatouille.
Topics discussed in this episode include:
-What debate is and how it works
-Experiments on debate in both machine learning and social science
-Optimism and pessimism about debate
-What amplification is and how it fits in
-How Geoffrey took inspiration from amplification and AlphaGo
-The importance of interpretability in debate
-How debate works for normative questions
-Why AI safety needs social scientists

Feb 28, 2019 • 52min
Part 2: Anthrax, Agent Orange, and Yellow Rain With Matthew Meselson and Max Tegmark
In this special two-part podcast, Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University.
Part Two focuses on three major incidents in the history of biological weapons: the 1979 anthrax outbreak in the Soviet Union, the use of Agent Orange and other herbicides in Vietnam, and the Yellow Rain controversy in the early 1980s. Dr. Meselson led the investigations into all three and solved some perplexing scientific mysteries along the way.


