Hear This Idea

Bonus: Preventing an AI-Related Catastrophe

Feb 24, 2023
Chapters
1. Introduction (00:00 • 3min)
2. Working on Reducing Risks From AI (02:57 • 3min)
3. Artificial Intelligence (AI) Risks (05:58 • 4min)
4. Is There a Risk of an Existential Catastrophe in AI? (10:09 • 2min)
5. The Future of Artificial Intelligence (11:58 • 4min)
6. How Much Compute Is Needed to Train the Largest Machine Learning Models? (16:25 • 2min)
7. Artificial Intelligence Scaling Hypothesis (18:08 • 2min)
8. The Scaling Hypothesis of Transformative AI (19:44 • 4min)
9. Advanced AI Systems Could Pose a Threat to Humanity (24:04 • 4min)
10. Planning in the Real World (27:51 • 2min)
11. Planning Systems - A Short Footnote (29:59 • 3min)
12. Advanced Planning AI Systems Could Easily Be Misaligned (32:36 • 3min)
13. Artificial Intelligence and Misalignment (35:13 • 3min)
14. Advanced AI Systems Could Be Dangerously Misaligned (38:08 • 3min)
15. A Planned AI System That Doesn't Seek Power (41:03 • 3min)
16. How to Control the Objectives of an AI System (43:57 • 3min)
17. The Second Requirement for Any Strategy to Work (46:35 • 4min)
18. Are We Concerned About Existential Catastrophes? (50:13 • 1min)
19. Why People Might Deploy Misaligned AI (51:42 • 4min)
20. What Could an Existential Catastrophe Caused by AI Really Look Like? (55:17 • 2min)
21. A Power-Seeking AI Could Take Power (57:14 • 3min)
22. How Advanced Planning AI Systems Could Change Our Lives (01:00:20 • 4min)
23. Technique Number 6: Developing Advanced Technology (01:03:54 • 3min)
24. How Could an AI Takeover Cause an Existential Catastrophe? (01:06:54 • 3min)
25. Artificial Intelligence - What Failure Looks Like? (01:10:23 • 3min)
26. The Second Story, Existential Catastrophe Through AI Development (01:13:30 • 3min)
27. Automate Automation - Numbers 5 and 6 (01:16:48 • 2min)
28. AI-Caused Existential Catastrophe (01:18:21 • 4min)
29. AI Could Worsen War (01:22:29 • 4min)
30. Dot Point, AI-Related Catastrophes (01:26:03 • 3min)
31. Are There Any Estimates of Existential Risks From AI? (01:29:27 • 4min)
32. Working on Technical AI Safety (01:33:31 • 2min)
33. How to Reduce Existential Risks From Artificial Intelligence (01:35:25 • 3min)
34. Why We Shouldn't Build Transformative AI (01:38:06 • 3min)
35. AI Development (01:40:45 • 4min)
36. AI Alignment - The End of the Dot Points (01:44:19 • 2min)
37. AI Safety (01:46:17 • 3min)
38. Arguments Against Working on AI Risk (01:49:41 • 3min)
39. Artificial General Intelligence (01:52:31 • 4min)
40. How Can We Sandbox a Dangerous AI System? (01:57:00 • 4min)
41. Why We Should Not Give AI Systems Bad Goals (02:00:31 • 3min)
42. Can AI Really Do a Lot of Good? (02:03:14 • 4min)
43. The Importance of Monetary Incentives for AI Systems (02:06:57 • 5min)
44. The End of the Dot Points (02:11:43 • 4min)
45. Working on Reducing Risks From AI Is a Good Idea (02:15:58 • 4min)
46. How to Make Future AI Systems Safe (02:19:58 • 3min)
47. List of Conceptual AI Safety Labs (02:23:04 • 3min)
48. Technical AI Safety Research (02:25:54 • 3min)
49. AI Impacts - Research and Policy (02:28:41 • 3min)
50. Career Review of AI Strategy and Policy Careers (02:31:44 • 3min)
51. Risks From AI - A New Career Path? (02:34:50 • 5min)