
LessWrong (Curated & Popular) "An artificially structured argument for expecting AGI ruin" by Rob Bensinger
May 16, 2023
Chapters
Introduction
00:00 • 3min
The Importance of STEM-Level AI
03:29 • 4min
The Future of STEM-Level AGI
07:00 • 2min
The Importance of Human Survival in STEM-Level AGI
09:16 • 3min
The Importance of Goal-Oriented Systems in AI
11:52 • 3min
The Importance of Goal-Oriented Behavior
15:09 • 3min
The Evolution of Goal-Oriented Systems
17:47 • 2min
The Importance of Goals in AI
19:27 • 4min
The Basic Reasons I Expect AGI Ruin
23:30 • 3min
How to Build AI Systems That Exhibit Dangerous Means-End Reasoning
26:18 • 2min
The Difficulty of AI Alignment
28:34 • 2min
How to Avert Instrumental Pressures
31:01 • 2min
The Challenge of Aligning STEM-Level AGI Systems to Share Humane Values
33:27 • 2min
The Importance of Pivotal Act Loading
35:52 • 4min
The Evolution of AI
39:46 • 4min
AGI Ruin: A List of Lethalities
44:02 • 2min
The Problems With AGI Proliferation
45:49 • 2min
The Central AI Alignment Problem
48:17 • 2min
The Arguments for 2D and 2B
50:46 • 3min
The Bottleneck on Reachable Decisive Strategic Advantages
53:22 • 3min
The Probability of AGI Ruin
56:04 • 2min
The Importance of Premises 1 to 3
58:20 • 2min
Artificially Structured Argument for Expecting AGI Ruin
01:00:46 • 2min
