

Future of Life Institute Podcast
Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes

Mar 30, 2023 • 48min
Lennart Heim on the AI Triad: Compute, Data, and Algorithms
Lennart Heim joins the podcast to discuss how we can forecast AI progress by researching AI hardware. You can read more about Lennart's work here: https://heim.xyz/about/
Timestamps:
00:00 Introduction
01:00 The AI triad
06:26 Modern chip production
15:54 Forecasting AI with compute
27:18 Running out of data?
32:37 Three eras of AI training
37:58 Next chip paradigm
44:21 AI takeoff speeds
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/

Mar 23, 2023 • 52min
Liv Boeree on Poker, GPT-4, and the Future of AI
Liv Boeree joins the podcast to discuss poker, GPT-4, human-AI interaction, whether this is the most important century, and building a dataset of human wisdom. You can read more about Liv's work here: https://livboeree.com
Timestamps:
00:00 Introduction
00:36 AI in Poker
09:35 Game-playing AI
13:45 GPT-4 and generative AI
26:41 Human-AI interaction
32:05 AI arms race risks
39:32 Most important century?
42:36 Diminishing returns to intelligence?
49:14 Dataset of human wisdom/meaning

Mar 16, 2023 • 42min
Liv Boeree on Moloch, Beauty Filters, Game Theory, Institutions, and AI
Liv Boeree joins the podcast to discuss Moloch, beauty filters, game theory, institutional change, and artificial intelligence. You can read more about Liv's work here: https://livboeree.com
Timestamps:
00:00 Introduction
01:57 What is Moloch?
04:13 Beauty filters
10:06 Science citations
15:18 Resisting Moloch
20:51 New institutions
26:02 Moloch and WinWin
28:41 Changing systems
33:37 Artificial intelligence
39:14 AI acceleration

Mar 9, 2023 • 43min
Tobias Baumann on Space Colonization and Cooperative Artificial Intelligence
Tobias Baumann joins the podcast to discuss suffering risks, space colonization, and cooperative artificial intelligence. You can read more about Tobias' work here: https://centerforreducingsuffering.org.
Timestamps:
00:00 Suffering risks
02:50 Space colonization
10:12 Moral circle expansion
19:14 Cooperative artificial intelligence
36:19 Influencing governments
39:34 Can we reduce suffering?

Mar 2, 2023 • 47min
Tobias Baumann on Artificial Sentience and Reducing the Risk of Astronomical Suffering
Tobias Baumann joins the podcast to discuss suffering risks, artificial sentience, and the problem of knowing which actions reduce suffering in the long-term future. You can read more about Tobias' work here: https://centerforreducingsuffering.org.
Timestamps:
00:00 Introduction
00:52 What are suffering risks?
05:40 Artificial sentience
17:18 Is reducing suffering hopelessly difficult?
26:06 Can we know how to reduce suffering?
31:17 Why are suffering risks neglected?
37:31 How do we avoid accidentally increasing suffering?

Feb 23, 2023 • 35min
Neel Nanda on Math, Tech Progress, Aging, Living up to Our Values, and Generative AI
Neel Nanda joins the podcast for a lightning round on mathematics, technological progress, aging, living up to our values, and generative AI. You can find his blog here: https://www.neelnanda.io
Timestamps:
00:00 Introduction
00:55 How useful is advanced mathematics?
02:24 Will AI replace mathematicians?
03:28 What are the key drivers of tech progress?
04:13 What scientific discovery would disrupt Neel's worldview?
05:59 How should humanity view aging?
08:03 How can we live up to our values?
10:56 What can we learn from a person who lived 1,000 years ago?
12:05 What should we do after we have aligned AGI?
16:19 What important concept is often misunderstood?
17:22 What is the most impressive scientific discovery?
18:08 Are language models better learning tools than textbooks?
21:22 Should settling Mars be a priority for humanity?
22:44 How can we focus on our work?
24:04 Are human-AI relationships morally okay?
25:18 Are there aliens in the universe?
26:02 What are Neel's favourite books?
27:15 What is an overlooked positive aspect of humanity?
28:33 Should people spend more time prepping for disaster?
30:41 Neel's advice for teens
31:55 How will generative AI evolve over the next five years?
32:56 How much can AIs achieve through a web browser?

Feb 16, 2023 • 1h 2min
Neel Nanda on Avoiding an AI Catastrophe with Mechanistic Interpretability
Neel Nanda joins the podcast to talk about mechanistic interpretability and how it can make AI safer. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io
Timestamps:
00:00 Introduction
00:46 How early is the field of mechanistic interpretability?
03:12 Why should we care about mechanistic interpretability?
06:38 What are some successes in mechanistic interpretability?
16:29 How promising is mechanistic interpretability?
31:13 Is machine learning analogous to evolution?
32:58 How does mechanistic interpretability make AI safer?
36:54 Does mechanistic interpretability help us control AI?
39:57 Will AI models resist interpretation?
43:43 Is mechanistic interpretability fast enough?
54:10 Does mechanistic interpretability give us a general understanding?
57:44 How can you help with mechanistic interpretability?

Feb 9, 2023 • 1h 5min
Neel Nanda on What is Going on Inside Neural Networks
Neel Nanda joins the podcast to explain how we can understand neural networks using mechanistic interpretability. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io
Timestamps:
00:00 Who is Neel?
04:41 How did Neel choose to work on AI safety?
12:57 What does an AI safety researcher do?
15:53 How analogous are digital neural networks to brains?
21:34 Are neural networks like alien beings?
29:13 Can humans think like AIs?
35:00 Can AIs help us discover new physics?
39:56 How advanced is the field of AI safety?
45:56 How did Neel form independent opinions on AI?
48:20 How does AI safety research decrease the risk of extinction?

Feb 2, 2023 • 1h 6min
Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education
Connor Leahy from Conjecture joins the podcast for a lightning round on a variety of topics ranging from aliens to education. Learn more about Connor's work at https://conjecture.dev

Jan 26, 2023 • 1h 5min
Connor Leahy on AI Safety and Why the World is Fragile
Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev
Timestamps:
00:00 Introduction
00:47 What is the best way to understand AI safety?
09:50 Why is the world relatively stable?
15:18 Is the main worry human misuse of AI?
22:47 Can humanity solve AI safety?
30:06 Can we slow down AI development?
37:13 How should governments regulate AI?
41:09 How do we avoid misallocating AI safety government grants?
51:02 Should AI safety research be done by for-profit companies?


