

Ep. 129: Applying the 'security mindset' to AI and x-risk | Jeffrey Ladish
Apr 11, 2023
Chapters
Introduction
00:00 • 2min
How to Survive an Interesting Transition
02:19 • 2min
The Impact of Artificial Intelligence on Climate Change
04:09 • 6min
The Future of AI
09:45 • 3min
The Discontinuities of GPT-3
12:40 • 2min
The Future of Agents
14:48 • 4min
The Importance of Interpretability in the Evolution of Agency
18:49 • 2min
The Risk of Scaling a System That's That Big
20:58 • 6min
The Risks of Having a GPT-3-Level Model
26:42 • 6min
The Importance of Embodiment in Artificial Intelligence
32:17 • 4min
The Importance of a Hope for AI
35:50 • 3min
GPT-4 Is Already Superhuman at Next-Token Prediction
38:38 • 2min
How to Build Superintelligence
40:17 • 5min
The Discourse Around Safety
45:45 • 2min
The Future of Alignment Research
48:14 • 4min
Alignment Fundamentals and Interpretability
52:00 • 3min
The Discontinuities of Public Speaking
54:30 • 3min
The Problem of Alignment
57:03 • 4min
The Importance of Recursive Self-Improvement
01:00:40 • 2min
The Future of Technology
01:02:23 • 3min