LessWrong (Curated & Popular)

"AGI Ruin: A List of Lethalities" by Eliezer Yudkowsky

Jun 20, 2022
Chapters
1. Introduction (00:00 • 3min)
2. The Biggest Challenge of AGI Alignment (03:25 • 2min)
3. The Lethal Problem of AGI Alignment (05:14 • 4min)
4. How a Superintelligence Could Kill Everybody on Earth (08:54 • 3min)
5. The Real Lethality Comes From the First Critical Try (11:59 • 2min)
6. Facebook AI Research (13:38 • 3min)
7. How Dare You Propose to Burn All GPUs? (16:24 • 2min)
8. The Best and Easiest Found by Optimization (18:43 • 3min)
9. A Machine Learning Paradigm Isn't Enough to Train Alignment (21:22 • 3min)
10. How to Predict the Alignment Problems of Superintelligence (24:15 • 3min)
11. How to Train by Gradient Descent in a Toy Domain (27:05 • 3min)
12. The Super Problem: Outer Optimization Doesn't Lead to Inner Alignment (29:57 • 3min)
13. Using the Paradigm of Loss Functions to Optimize a Cognitive System Will Kill You (32:58 • 3min)
14. The Best Predictive Explanation for Human Operators (35:49 • 3min)
15. How to Find a Simple Corrigible Alignment (38:47 • 4min)
16. Is the AGI Planning to Kill Us? (42:32 • 2min)
17. The AI Is a Weaker Intelligence Than Humans (44:32 • 3min)
18. How to Make a Superintelligence (47:17 • 3min)
19. The New Cynical Old Veteran (50:05 • 3min)
20. The Field of AI Safety Is Not Making Real Progress (52:36 • 3min)
21. The Importance of Having the Ability to Write a Document (55:10 • 3min)
22. What a Surviving World Looks Like (57:43 • 4min)