The Data Exchange with Ben Lorica

Efficient Methods for Natural Language Processing

Dec 1, 2022
Chapters
1. Introduction (00:00 • 5min)
2. How to Scale Out Efficiency in Deep Learning? (05:07 • 2min)
3. Using 100% of the Data, You're Not Losing That Much (06:53 • 2min)
4. Pre-Trained Models - Can I Apply This to Any Kind of Domain? (08:24 • 2min)
5. The High-Level Takeaway for Efficiency Methods (10:41 • 3min)
6. Is There a Trade-Off Between an Expert and a Mixed Approach? (13:49 • 2min)
7. How Do You Improve the Efficiency of Your Model? (15:43 • 2min)
8. Pre-Training and Fine-Tuning in NLP (17:14 • 2min)
9. Using Labeled Examples to Fine-Tune Your Model (19:10 • 3min)
10. The Challenge of Fine-Tuning on GPUs (21:49 • 2min)
11. Evaluating Models Like GPT-3 (23:24 • 2min)
12. NLP Models - What Are the First Two to Three Things You Would Try? (25:09 • 2min)
13. NLP (27:04 • 1min)
14. NLP, Data Set Creators and Data Set Engineers (28:34 • 2min)
15. 80% of Energy Costs Are on the Inference Side, Right? (30:42 • 2min)
16. How Much Interest and Awareness Is There in Green AI? (32:57 • 2min)
17. How Much Does It Cost to Train Large Neural Networks? (35:08 • 2min)
18. Detoxification of Language Models (36:48 • 2min)
19. Is There a Bias in Data Sets? (38:56 • 2min)
20. Is There a Tool That Can Surface Biases? (40:53 • 2min)
21. The Data Exchange Podcast Is a Property of Gradient Flow (43:12 • 2min)