Chapters
Introduction
00:00 • 2min
What Do You Mean When You Say Serverless GPUs?
02:30 • 5min
Cold Boot - Is There a Constraint Model?
07:18 • 2min
Serverless Workflows
09:22 • 5min
ML Model Inference - What's the Difference?
14:22 • 2min
The Changelog Network - What Does It Look Like for You?
16:17 • 4min
Is the Primary Use Case a Client Integration?
20:33 • 4min
Inference Side Training
24:51 • 4min
Nexa
28:35 • 1min
Is There a Future for Serverless?
29:56 • 5min
Where's the Puck Going?
35:23 • 3min