Chapters
Introduction
00:00 • 3min
Working on Reducing Risks From AI
02:57 • 3min
Artificial Intelligence (AI) Risks
05:58 • 4min
Is There a Risk of an Existential Catastrophe in AI?
10:09 • 2min
The Future of Artificial Intelligence
11:58 • 4min
How Much Compute Is Needed to Train the Largest Machine Learning Models?
16:25 • 2min
Artificial Intelligence Scaling Hypothesis
18:08 • 2min
The Scaling Hypothesis of Transformative AI
19:44 • 4min
Advanced AI Systems Could Pose a Threat to Humanity
24:04 • 4min
Planning in the Real World
27:51 • 2min
Planning Systems - A Short Footnote
29:59 • 3min
Advanced Planning AI Systems Could Easily Be Misaligned
32:36 • 3min
Artificial Intelligence and Misalignment
35:13 • 3min
Advanced AI Systems Could Be Dangerously Misaligned
38:08 • 3min
A Planning AI System That Doesn't Seek Power
41:03 • 3min
How to Control the Objectives of an AI System
43:57 • 3min
The Second Requirement for Any Strategy to Work
46:35 • 4min
Are We Concerned About Existential Catastrophes?
50:13 • 1min
Why People Might Deploy Misaligned AI
51:42 • 4min
What Could an Existential Catastrophe Caused by AI Really Look Like?
55:17 • 2min
A Power-Seeking AI Could Take Power
57:14 • 3min
How Advanced Planning AI Systems Could Change Our Lives
01:00:20 • 4min
Technique Number 6: Developing Advanced Technology
01:03:54 • 3min
How Could an AI Takeover Cause an Existential Catastrophe?
01:06:54 • 3min
Artificial Intelligence: What Failure Looks Like
01:10:23 • 3min
The Second Story, Existential Catastrophe Through AI Development
01:13:30 • 3min
Automate Automation - Numbers 5 and 6
01:16:48 • 2min
AI-Caused Existential Catastrophe
01:18:21 • 4min
How AI Could Worsen War
01:22:29 • 4min
Dot Point, AI-Related Catastrophes
01:26:03 • 3min
Are There Any Estimates of Existential Risks From AI?
01:29:27 • 4min
Working on Technical AI Safety
01:33:31 • 2min
How to Reduce Existential Risks From Artificial Intelligence
01:35:25 • 3min
Why We Shouldn't Build Transformative AI
01:38:06 • 3min
AI Development
01:40:45 • 4min
AI Alignment - The End of the Dot Points
01:44:19 • 2min
AI Safety
01:46:17 • 3min
Arguments Against Working on AI Risk
01:49:41 • 3min
Artificial General Intelligence
01:52:31 • 4min
How Can We Sandbox a Dangerous AI System?
01:57:00 • 4min
Why We Should Not Give AI Systems Bad Goals
02:00:31 • 3min
Can AI Really Do a Lot of Good?
02:03:14 • 4min
The Importance of Monetary Incentives for AI Systems
02:06:57 • 5min
The End of the Dot Points
02:11:43 • 4min
Working on Reducing Risks From AI Is a Good Idea
02:15:58 • 4min
How to Make Future AI Systems Safe
02:19:58 • 3min
List of Conceptual AI Safety Labs
02:23:04 • 3min
Technical AI Safety Research
02:25:54 • 3min
AI Impacts - Research and Policy
02:28:41 • 3min
Career Review of AI Strategy and Policy Careers
02:31:44 • 3min
Risks From AI - A New Career Path?
02:34:50 • 5min