AXRP - the AI X-risk Research Podcast

12 - AI Existential Risk with Paul Christiano

Dec 2, 2021
Chapters
1
Introduction
00:00 • 2min
2
Existential Risk: Human Potential in Expectation
01:44 • 2min
3
How to Delegate Cognitive Work in a System?
03:24 • 3min
4
AI Systems
05:57 • 3min
5
Are There Any Risks in the Future?
08:39 • 2min
6
Are There Risks That Are AI Risks?
10:33 • 3min
7
Is AI Progress Going to Be Really Rapid?
13:44 • 4min
8
The Timescale Is Measured in Years Rather Than a Decade
17:19 • 2min
9
Is That Akin to an Endogenous Growth Story?
19:08 • 2min
10
What's the Key Difference Between Humans and AI Systems?
20:53 • 4min
11
Changing Gears: When Will We Get Very Capable AI?
24:38 • 4min
12
Is There a Complementary Empirical Question?
28:27 • 4min
13
Is There a Road Map for Aligned AI?
32:22 • 2min
14
AI Progress
34:52 • 4min
15
Are There Economies of Scale in Training Machine Learning Systems?
38:45 • 3min
16
Are All the Eyes on AI Technology Changing the Story?
41:32 • 4min
17
Is There a Secret Sauce to Intelligence?
45:43 • 3min
18
Is the Rate of Progress in Software Getting Better?
49:00 • 2min
19
Are You Doing Well or Poorly?
51:09 • 3min
20
Is There a Neural Network?
53:49 • 3min
21
Scaling Up
56:20 • 6min
22
How to Measure the Rate of Progress in Computer Science?
01:02:10 • 2min
23
Do You Think COVID-19 Has Changed Your Beliefs?
01:03:49 • 4min
24
What's the Difference Between Super Intelligence and Superintelligence?
01:08:00 • 2min
25
Is There a Decisive Strategic Advantage?
01:10:10 • 4min
26
What Technical Problems Could Cause Existential Risk?
01:14:02 • 5min
27
Do You Really Want to Copy-Paste a Strawberry?
01:19:21 • 3min
28
Carving Up Research Problems Differently From Sources of Doom
01:22:18 • 2min
29
Intent Alignment
01:24:15 • 4min
30
Optimizing for Doom?
01:27:57 • 3min
31
Is There a Human Decomposition?
01:30:47 • 2min
32
Is There a Decomposition of Intent Alignment?
01:33:15 • 2min
33
Is It a Ranking Over Policies?
01:34:58 • 3min
34
Is There a Difference Between Outer and Inner Alignment?
01:37:33 • 3min
35
The Right Way to Specialize on a Subproblem
01:40:28 • 2min
36
Is There a Difference in Methodological Approaches to Machine Learning?
01:42:55 • 3min
37
Are You Thinking, "Ooh, Here's What I Want"?
01:45:31 • 2min
38
Is It the Right Thing to Do in Computer Science?
01:47:12 • 4min
39
How Much Do You Rely on Hindsight to Evaluate Behaviour?
01:51:31 • 4min
40
Is There a Way to Improve Imitative Reinforcement Learning?
01:55:04 • 4min
41
Are You Interested in Interpretability?
01:59:25 • 2min
42
How Does It Help in Outer-Alignment-Type Settings?
02:01:06 • 5min
43
The Problem Is That You Can't Predict Humans
02:06:08 • 2min
44
How Does a Deep Neural Network Work?
02:08:16 • 3min
45
How to Train a System Like This?
02:11:20 • 3min
46
Using a Whiteboard to Write Code
02:14:40 • 2min
47
The Factored Cognition Approach to Alignment
02:16:12 • 5min
48
How to Optimize a Reference Manual
02:21:28 • 3min
49
AI Systems
02:24:45 • 2min
50
The Basic Ingredient That Doesn't Sound Sufficient
02:26:36 • 3min
51
Is There a Need for Decoupling?
02:29:44 • 2min
52
The Basic Organizing Framework for Doom
02:31:16 • 3min
53
How to Write a Simple Pseudo-Programmable Object
02:34:00 • 2min
54
What Are the Biggest Disagreements?
02:36:07 • 4min
55
Is There a Coupling Between Deep Learning and Machine Learning?
02:40:14 • 3min
56
What Are Your Most Important Uncertainties?
02:43:04 • 3min
57
Is There Anything That I Shouldn't Have Asked?
02:45:39 • 4min