Chapters
Introduction
00:00 • 6min
Token Predictors Capture Intelligence
05:36 • 2min
The Anomaly of Language Models
07:23 • 3min
The Future of Machine Learning
10:04 • 3min
The Importance of a Simple Feed-Forward Network
12:49 • 3min
The Future of Machine Learning
15:52 • 2min
The Immature Field of Machine Learning
18:10 • 2min
Chinchilla: A New Focus on Data Efficiency
20:20 • 2min
Chinchilla Scaling Laws
22:28 • 2min
The Limits of Human Intelligence
23:58 • 2min
What Constitutes an Explosive Investment in Machine Learning Hardware?
26:26 • 5min
Nvidia's Data Center Revenue in Q2 Financial Year 23
31:39 • 2min
Nvidia High-End Gaming GPU FP32 Flops
34:06 • 4min
NVIDIA Is Pumping Out 3 GPT-3s Every Single Day
38:16 • 1min
The Physical Limits of Hardware Computing
39:40 • 5min
The Impossibility of Koomey's Law Continuing
44:26 • 2min
Koomey's Law and the Future of Machine Learning
46:56 • 2min
The End of Computational Scaling
48:28 • 3min
The Implications of Hardware Advancements for AI
51:27 • 2min
The Future of Artificial Intelligence
53:25 • 4min
The Future of AI
57:22 • 3min
The Future of Machine Learning
59:57 • 4min
Probability of Doom for AGI Development at Different Dates
01:03:51 • 5min
The Importance of Coherence in AGI Architectures
01:08:43 • 3min
The Importance of Optimism
01:11:18 • 2min
The Probability of Doom, Given AGI by Date, Slopes Downwards
01:12:59 • 2min
The Importance of a Background Vibe of Normalcy
01:14:41 • 3min
The Slowdown in Moore's Law
01:18:09 • 2min
The Risks of AGI
01:20:17 • 2min
The Transition From Narrow AI to Dangerous AI
01:21:53 • 4min