Generally Intelligent

Dylan Hadfield-Menell, UC Berkeley/MIT: The value alignment problem in AI

May 12, 2021
Chapters
1
Introduction
00:00 • 2min
2
Controlling AI Systems in a Robotic Environment
02:22 • 2min
3
Is There a Subjective Versus Objective Goal in Machine Learning?
04:41 • 3min
4
Is This a Principal Agent Problem in Artificial Intelligence?
08:01 • 4min
5
Is There a Loop of System Behavior?
12:07 • 2min
6
How to Build Better Signals of What You Value
14:04 • 5min
7
Optimizing Agents - What's the Biggest Problem?
18:50 • 3min
8
What Would You Change About the Stock Market?
22:08 • 5min
9
Updating AI Systems in a Production Setting
27:19 • 3min
10
I Think There's a Lot of Intelligence in How We Manage the Optimization Component of Ourselves
30:36 • 2min
11
Is Machine Learning a Good Idea?
32:20 • 6min
12
Creating a Data Set That Really Gets You There
37:58 • 4min
13
How Do We Communicate in Groups Effectively?
42:02 • 6min
14
How Do You Filter Resumes?
47:45 • 4min
15
How Do You Evaluate Your Processes?
52:15 • 3min
16
A General-Purpose Programming Language for Machine Learning?
55:31 • 4min
17
Is It Possible to Measure Temperature?
59:04 • 3min
18
Is There a Market in Efficiency?
01:02:29 • 2min
19
How to Scale Control of a Global Internet Platform
01:04:09 • 3min
20
Getting People to Think About Their Values
01:06:57 • 3min
21
Delegated Recommendations
01:10:16 • 2min
22
Machine Learning and Information Diets
01:11:56 • 2min
23
Do You Really Need a Predictive Model of How Things Will Change?
01:13:45 • 3min
24
The Incomplete Contracting Line of Work in Machine Learning
01:16:23 • 2min
25
How Is Unsupervised Learning Progressing?
01:18:33 • 3min
26
Unsupervised Learning and Manipulation
01:21:42 • 4min
27
Imitation Learning
01:25:43 • 4min
28
Is There Anything That You Want to Put Out There?
01:29:30 • 3min