NLP Highlights

117 - Interpreting NLP Model Predictions, with Sameer Singh

Aug 13, 2020
Chapters
1. Introduction (00:00 • 4min)
2. The Different Motivations for Predicting Things for the Right Reason (03:36 • 3min)
3. The Trade-Off Between Interpretability and Non-Linear Models (06:30 • 3min)
4. The Problems With Linear Models (09:28 • 2min)
5. The Challenges of Interpretability in Machine Learning (11:36 • 2min)
6. How to Approach Interpretability (13:37 • 2min)
7. How to Interpret a Predictive Model (15:29 • 2min)
8. The Different Types of Gradients (17:05 • 4min)
9. The Importance of a Token in a Linear Model (20:38 • 2min)
10. The Importance of Shapley Values in Text (22:59 • 3min)
11. The Importance of Explanation in Interpretability Methods (26:11 • 3min)
12. The Pathological Nature of Modeling (29:01 • 2min)
13. Influence Functions in Machine Learning (30:41 • 2min)
14. Influence Functions in Modeling (32:52 • 3min)
15. How to Compute an Influence Function (35:43 • 2min)
16. The Importance of Influence Functions in NLP (37:25 • 2min)
17. The Importance of Interpretability in Model Design (39:47 • 3min)
18. How to Evaluate Explanation Methods (42:47 • 3min)
19. How to Evaluate Model Behavior (46:00 • 4min)
20. The Importance of Gradient-Based Methods in Modeling (49:41 • 2min)
21. The Importance of Manipulating Gradients in Computer Vision (52:06 • 2min)
22. The Caveats of Machine Learning (54:21 • 3min)