The Inside View

Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI

Jul 16, 2023
Chapters
1. Introduction (00:00 • 5min)
2. The Negative Effects of Eliezer's Posts on Optimism (04:49 • 2min)
3. The Probability of Dying From AI Extinction (06:35 • 2min)
4. The Doom of OpenAI (08:29 • 2min)
5. The Future of AI (10:29 • 2min)
6. The Different Classes of Counterarguments (12:43 • 2min)
7. The Cost of a Gaming GPU (14:28 • 2min)
8. How Much Compute Does the Human Brain Need? (16:28 • 5min)
9. Ajeya's Best Guess: 10^29 in 2025 (21:25 • 2min)
10. The Lifetime Anchor and the Evolution of Human-Level Algorithms (23:48 • 2min)
11. The Future of Timeline Prediction (25:31 • 3min)
12. The Complications of Timelines (28:44 • 2min)
13. The Dangers of Chatbot Technology (30:46 • 3min)
14. The Slowdown of Progress (33:16 • 2min)
15. The Future of Intelligence (35:28 • 4min)
16. The Future of AI (39:00 • 2min)
17. The Argument for Rapid Self-Improvement in AI (40:51 • 2min)
18. The Future of AI (42:22 • 3min)
19. The Race to Build a DeepMind Cluster (45:22 • 2min)
20. The Pros and Cons of Open-Source Research and Open-Source Software (47:45 • 3min)
21. The Importance of Sharing Models in AI (50:47 • 2min)
22. The Concerns About Open-Source Models (52:30 • 3min)
23. How to Train a Language Model With a Limited Context Window (55:04 • 5min)
24. How to Solve Alignment Problems With a Language Model (59:52 • 2min)
25. The EleutherAI Approach to Language Model Development (01:01:59 • 3min)
26. How to Get Started With a No-to-Lingual Project (01:05:09 • 2min)
27. The Importance of Alignment Minetest Projects (01:07:38 • 4min)
28. How to Train a Minetest Agent to Punch Trees (01:11:39 • 4min)
29. The Next Step in Model-Based RL (01:15:26 • 2min)
30. Building a Gym Environment for Agents (01:17:07 • 2min)
31. Scaling Up a Neural Network to Improve Interpretability (01:19:21 • 2min)
32. How to Make Models More Likely to Be Corrected by Humans (01:21:25 • 3min)
33. The Off-Switch Game (01:24:40 • 2min)
34. How to Fix Failure Modes in a Minecraft Environment (01:27:02 • 3min)