LessWrong (Curated & Popular)

"An artificially structured argument for expecting AGI ruin" by Rob Bensinger

May 16, 2023
Chapters
1. Introduction (00:00 • 3min)
2. The Importance of STEM-Level AI (03:29 • 4min)
3. The Future of STEM-Level AGI (07:00 • 2min)
4. The Importance of Human Survival in STEM-Level AGI (09:16 • 3min)
5. The Importance of Goal-Oriented Systems in AI (11:52 • 3min)
6. The Importance of Goal-Oriented Behavior (15:09 • 3min)
7. The Evolution of Goal-Oriented Systems (17:47 • 2min)
8. The Importance of Goals in AI (19:27 • 4min)
9. The Basic Reasons I Expect AGI Ruin (23:30 • 3min)
10. How to Build AI Systems That Exhibit Dangerous Means and Reasoning (26:18 • 2min)
11. The Difficulty of AI Alignment (28:34 • 2min)
12. How to Avert Instrumental Pressures (31:01 • 2min)
13. The Challenge of Aligning STEM-Level AGI Systems to Share Humane Values (33:27 • 2min)
14. The Importance of Pivotal Act Loading (35:52 • 4min)
15. The Evolution of AI (39:46 • 4min)
16. AGI Ruin: A List of Lethalities (44:02 • 2min)
17. The Problems With AGI Proliferation (45:49 • 2min)
18. The Central AI Alignment Problem (48:17 • 2min)
19. The Arguments for 2D and 2B (50:46 • 3min)
20. The Bottleneck on Reachable Decisive Strategic Advantages (53:22 • 3min)
21. The Probability of AGI Ruin (56:04 • 2min)
22. The Importance of Premises 1 to 3 (58:20 • 2min)
23. An Artificially Structured Argument for Expecting AGI Ruin (01:00:46 • 2min)