

12 - AI Existential Risk with Paul Christiano
Dec 2, 2021
Chapters
Introduction
00:00 • 2min
Human Potential in Expectation
01:44 • 2min
How to Delegate Cognitive Work in a System?
03:24 • 3min
AI Systems
05:57 • 3min
Are There Any Risks in the Future?
08:39 • 2min
Are There Risks That Are AI Risks?
10:33 • 3min
Is AI Progress Going to Be Really Rapid?
13:44 • 4min
The Time Scale Is Measured More Like Years Than a Decade
17:19 • 2min
Is That Akin to an Endogenous Growth Story?
19:08 • 2min
What's the Key Difference Between Humans and AI Systems?
20:53 • 4min
Changing Gears: When Will We Get Very Capable AI?
24:38 • 4min
Is There a Complementary Empirical Question?
28:27 • 4min
Is There a Road Map for Aligned AI?
32:22 • 2min
AI Progress
34:52 • 4min
Are There Economies of Scale in Training Machine Learning Systems?
38:45 • 3min
Are All the Eyes on AI Technology Changing the Story?
41:32 • 4min
Is There a Secret Sauce to Intelligence?
45:43 • 3min
Is the Rate of Progress in Software Getting Better?
49:00 • 2min
Are You Doing Well or Poorly?
51:09 • 3min
Is There a Neural Network?
53:49 • 3min
Scaling Up
56:20 • 6min
How to Measure the Rate of Progress in Computer Science?
01:02:10 • 2min
Do You Think Covid-19 Has Changed Your Beliefs?
01:03:49 • 4min
What's the Difference Between Super Intelligence and Superintelligence?
01:08:00 • 2min
Is There a Decisive Strategic Advantage?
01:10:10 • 4min
What Technical Problems Could Cause Existential Risk?
01:14:02 • 5min
Do You Really Want to Copy-Paste a Strawberry?
01:19:21 • 3min
Carving Up Research Problems Differently From Sources of Doom
01:22:18 • 2min
Intent Alignment
01:24:15 • 4min
Optimizing for Doom?
01:27:57 • 3min
Is There a Human Decomposition?
01:30:47 • 2min
Is There a Decomposition of Intent Alignment?
01:33:15 • 2min
Is It a Ranking Over Policies?
01:34:58 • 3min
Is There a Difference Between Outer and Inner Alignment?
01:37:33 • 3min
The Right Way to Specialize on a Subproblem
01:40:28 • 2min
Is There a Difference in Methodological Approaches to Machine Learning?
01:42:55 • 3min
Are You Going, 'Ooh, Here's What I Want'?
01:45:31 • 2min
Is It the Right Thing to Do in Computer Science?
01:47:12 • 4min
How Much Do You Rely on Hindsight to Evaluate Behaviour?
01:51:31 • 4min
Is There a Way to Improve Imitative Reinforcement Learning?
01:55:04 • 4min
Are You Interested in Interpretability?
01:59:25 • 2min
How Does It Help in Outer Alignment-Type Settings?
02:01:06 • 5min
The Problem Is That You Can't Predict Humans
02:06:08 • 2min
How Do Deep Neural Networks Work?
02:08:16 • 3min
How to Train a System Like This?
02:11:20 • 3min
Using a Whiteboard to Write Code
02:14:40 • 2min
The Factored Cognition Approach to Alignment
02:16:12 • 5min
How to Optimize a Reference Manual
02:21:28 • 3min
AI Systems
02:24:45 • 2min
The Basic Ingredient That Doesn't Sound Sufficient
02:26:36 • 3min
Is There a Need for Decoupling?
02:29:44 • 2min
The Basic Organizing Framework for Doom
02:31:16 • 3min
How to Write a Simple Pseudo-Program
02:34:00 • 2min
What Are the Biggest Disagreements?
02:36:07 • 4min
Is There a Coupling Between Deep Learning and Machine Learning?
02:40:14 • 3min
What Are Your Most Important Uncertainties?
02:43:04 • 3min
Is There Anything That I Shouldn't Have Asked?
02:45:39 • 4min