
Future of Life Institute Podcast
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change.
The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.
FLI has become one of the world's leading voices on AI governance, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Latest episodes

Jun 22, 2023 • 2h 24min
Joe Carlsmith on How We Change Our Minds About AI Risk
Joe Carlsmith joins the podcast to discuss how we change our minds about AI risk, gut feelings versus abstract models, and what to do if transformative AI is coming soon. You can read more about Joe's work at https://joecarlsmith.com.
Timestamps:
00:00 Predictable updating on AI risk
07:27 Abstract models versus gut feelings
22:06 How Joe began believing in AI risk
29:06 Is AI risk falsifiable?
35:39 Types of skepticisms about AI risk
44:51 Are we fundamentally confused?
53:35 Becoming alienated from ourselves?
1:00:12 What will change people's minds?
1:12:34 Outline of different futures
1:20:43 Humanity losing touch with reality
1:27:14 Can we understand AI sentience?
1:36:31 Distinguishing real from fake sentience
1:39:54 AI doomer epistemology
1:45:23 AI benchmarks versus real-world AI
1:53:00 AI improving AI research and development
2:01:08 What if transformative AI comes soon?
2:07:21 AI safety if transformative AI comes soon
2:16:52 AI systems interpreting other AI systems
2:19:38 Philosophy and transformative AI
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/

Jun 8, 2023 • 2h 27min
Dan Hendrycks on Why Evolution Favors AIs over Humans
Dan Hendrycks joins the podcast to discuss evolutionary dynamics in AI development and how we could develop AI safely. You can read more about Dan's work at https://www.safe.ai
Timestamps:
00:00 Corporate AI race
06:28 Evolutionary dynamics in AI
25:26 Why evolution applies to AI
50:58 Deceptive AI
1:06:04 Competition erodes safety
1:17:40 Evolutionary fitness: humans versus AI
1:26:32 Different paradigms of AI risk
1:42:57 Interpreting AI systems
1:58:03 Honest AI and uncertain AI
2:06:52 Empirical and conceptual work
2:12:16 Losing touch with reality

May 26, 2023 • 1h 42min
Roman Yampolskiy on Objections to AI Safety
Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/
Timestamps:
00:00 Objections to AI safety
15:06 Will robots make AI risks salient?
27:51 Was early AI safety research useful?
37:28 Impossibility results for AI
47:25 How much risk should we accept?
1:01:21 Exponential or S-curve?
1:12:27 Will AI accidents increase?
1:23:56 Will we know who was right about AI?
1:33:33 Difference between AI output and AI model

May 11, 2023 • 1h 7min
Nathan Labenz on How AI Will Transform the Economy
Nathan Labenz joins the podcast to discuss the economic effects of AI on growth, productivity, and employment. We also talk about whether AI might have catastrophic effects on the world. You can read more about Nathan's work at https://www.cognitiverevolution.ai
Timestamps:
00:00 Economic transformation from AI
11:15 Productivity increases from technology
17:44 AI effects on employment
28:43 Life without jobs
38:42 Losing contact with reality
42:31 Catastrophic risks from AI
53:52 Scaling AI training runs
1:02:39 Stable opinions on AI?

May 4, 2023 • 60min
Nathan Labenz on the Cognitive Revolution, Red Teaming GPT-4, and Potential Dangers of AI
Nathan Labenz joins the podcast to discuss the cognitive revolution, his experience red teaming GPT-4, and the potential near-term dangers of AI. You can read more about Nathan's work at https://www.cognitiverevolution.ai
Timestamps:
00:00 The cognitive revolution
07:47 Red teaming GPT-4
24:00 Coming to believe in transformative AI
30:14 Is AI depth or breadth most impressive?
42:52 Potential near-term dangers from AI

Apr 27, 2023 • 1h 18min
Maryanna Saenko on Venture Capital, Philanthropy, and Ethical Technology
Maryanna Saenko joins the podcast to discuss how venture capital works, how to fund innovation, and what the fields of investing and philanthropy could learn from each other. You can read more about Maryanna's work at https://future.ventures
Timestamps:
00:00 How does venture capital work?
09:01 Failure and success for startups
13:22 Is overconfidence necessary?
19:20 Repeat entrepreneurs
24:38 Long-term investing
30:36 Feedback loops from investments
35:05 Timing investments
38:35 The hardware-software dichotomy
42:19 Innovation prizes
45:43 VC lessons for philanthropy
51:03 Creating new markets
54:01 Investing versus philanthropy
56:14 Technology preying on human frailty
1:00:55 Are good ideas getting harder to find?
1:06:17 Artificial intelligence
1:12:41 Funding ethics research
1:14:25 Is philosophy useful?

Apr 20, 2023 • 52min
Connor Leahy on the State of AI and Alignment Research
Connor Leahy joins the podcast to discuss the state of AI. Which labs are ahead? Which alignment solutions might work? How will the public react to more capable AI? You can read more about Connor's work at https://conjecture.dev
Timestamps:
00:00 Landscape of AI research labs
10:13 Is AGI a useful term?
13:31 AI predictions
17:56 Reinforcement learning from human feedback
29:53 Mechanistic interpretability
33:37 Yudkowsky and Christiano
41:39 Cognitive emulations
43:11 Public reactions to AI

Apr 13, 2023 • 1h 37min
Connor Leahy on AGI and Cognitive Emulation
Connor Leahy joins the podcast to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at https://conjecture.dev
Timestamps:
00:00 GPT-4
16:35 "Magic" in machine learning
27:43 Cognitive emulations
38:00 Machine learning vs. explainability
48:00 Does human data produce human-like AI?
1:00:07 Analogies for cognitive emulations
1:26:03 Demand for human-like AI
1:31:50 Aligning superintelligence

Apr 6, 2023 • 50min
Lennart Heim on Compute Governance
Lennart Heim joins the podcast to discuss options for governing the compute used by AI labs and potential problems with this approach to AI safety. You can read more about Lennart's work here: https://heim.xyz/about/
Timestamps:
00:00 Introduction
00:37 AI risk
03:33 Why focus on compute?
11:27 Monitoring compute
20:30 Restricting compute
26:54 Subsidising compute
34:00 Compute as a bottleneck
38:41 US and China
42:14 Unintended consequences
46:50 Will AI be like nuclear energy?

Mar 30, 2023 • 48min
Lennart Heim on the AI Triad: Compute, Data, and Algorithms
Lennart Heim joins the podcast to discuss how we can forecast AI progress by researching AI hardware. You can read more about Lennart's work here: https://heim.xyz/about/
Timestamps:
00:00 Introduction
01:00 The AI triad
06:26 Modern chip production
15:54 Forecasting AI with compute
27:18 Running out of data?
32:37 Three eras of AI training
37:58 Next chip paradigm
44:21 AI takeoff speeds
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/