Chapters
Introduction
00:00 • 2min
The Nanny Rails of OpenAI
02:30 • 2min
The Problem With Facebook's Open AI API
04:24 • 2min
Open Source Replications of Language Models
06:22 • 3min
The Evolution of Open Source Modeling
08:56 • 2min
The Future of Programming
11:14 • 2min
The Future of Programming Languages
13:07 • 2min
The OpenAI Plugin Architecture
14:58 • 2min
The State of Open Source Generative Models
16:46 • 3min
The History of GPUs and Parallel Processing
19:50 • 2min
The History of Deep Learning
22:01 • 2min
The Future of Deep Learning
24:19 • 4min
How to Squeeze Goodness and Usability Out of Less Total Computing
28:22 • 5min
The Evolution of Stability AI
33:16 • 2min
The Effect of Order of Training on Model Memorization
35:15 • 2min
The Importance of Scaling Up Large Language Models
37:02 • 2min
The DAN Jailbreak: A Joint Project From Stability and EleutherAI
38:52 • 3min
The Future of Deep Learning
42:00 • 3min
How to Make a Pre-Drink
45:06 • 2min
How to Create Richly Labeled Data
47:05 • 2min
How to Make Money With ChatGPT
48:40 • 3min
How to Scale a Scaling Suite for Sufficient Performance
51:28 • 4min
The Future of OpenAI
55:21 • 3min
GPT-5: Too Dangerous to Exist
58:27 • 2min
The Politics of Political Bias
01:00:03 • 2min
How to Open Source Human Reinforcement Learning Data
01:01:35 • 4min
How to Be a Great Prompt Engineer
01:05:42 • 2min