Chapters
Introduction
00:00 • 5min
How Do You Scale Efficiently in Deep Learning?
05:07 • 2min
If You Don't Use 100% of the Data, You're Not Losing That Much
06:53 • 2min
Pre-Trained Models - Can They Be Applied to Any Kind of Domain?
08:24 • 2min
The High-Level Takeaway for Efficiency Methods
10:41 • 3min
Is There a Trade-Off Between the Expert and Mixed Approaches?
13:49 • 2min
How Do You Improve the Efficiency of Your Model?
15:43 • 2min
Pre-Training and Fine-Tuning in NLP
17:14 • 2min
Using Labeled Examples to Fine-Tune Your Model
19:10 • 3min
The Challenge of Fine-Tuning on GPUs
21:49 • 2min
Evaluating Models Like GPT-3
23:24 • 2min
NLP Models - What Are the First Two to Three Things You Would Try?
25:09 • 2min
NLP
27:04 • 1min
NLP, Data Set Creators and Data Set Engineers
28:34 • 2min
80% of Energy Costs Are on the Inference Side, Right?
30:42 • 2min
How Much Interest and Awareness Is There in Green AI?
32:57 • 2min
How Much Does It Cost to Train Large Neural Networks?
35:08 • 2min
Detoxification of Language Models
36:48 • 2min
Is There Bias in Data Sets?
38:56 • 2min
Is There a Tool That Can Surface Biases?
40:53 • 2min
The Data Exchange Podcast Is a Property of Gradient Flow
43:12 • 2min