

Laura Weidinger: Ethical Risks, Harms, and Alignment of Large Language Models
Aug 5, 2022
Chapters
Introduction
00:00 • 4min
Is There a Difference Between AI and Cognitive Science?
03:40 • 3min
How Did You Become Interested in AI Ethics?
06:23 • 2min
How Do We Measure How Ethical AI Is?
08:50 • 3min
Foresight, Sandboxing, Participation
12:03 • 2min
Is There a Cultural Shift in AI Research?
14:28 • 3min
The Ethical and Social Risks of Harm From Language Models
17:17 • 3min
DeepMind: What Are the Types of Risks?
19:58 • 4min
The Impact of Language Models in AI Development
24:25 • 3min
How Do We Measure Harmful Text?
27:33 • 5min
What's the Main Insight From This Higher Level Synthesis?
32:45 • 3min
Is the AI Act Going to Be a Live Process?
35:36 • 3min
AI Ethics - You Can't Explain Deep Learning?
38:27 • 3min
Misspecification in Language Models
41:31 • 2min
Is There Any Way to Simulate Misspecification From Training Data?
43:23 • 3min
GPT-4chan - What Are the Main Harms of Training GPT-J on Hate Speech?
46:07 • 2min
Are You Being Responsible When You Release a Model?
48:29 • 2min
Is GPT-4chan a Hacker's Model?
50:33 • 2min
What Do You Do Outside of Research?
52:30 • 3min