Chapters
Introduction
00:00 • 4min
The Different Motivations for Predicting Things for the Right Reason
03:36 • 3min
The Trade-Off Between Interpretability and Non-Linear Models
06:30 • 3min
The Problems With Linear Models
09:28 • 2min
The Challenges of Interpretability in Machine Learning
11:36 • 2min
How to Approach Interpretability
13:37 • 2min
How to Interpret a Predictive Model
15:29 • 2min
The Different Types of Gradients
17:05 • 4min
The Importance of a Token in a Linear Model
20:38 • 2min
The Importance of Shapley Values in Text
22:59 • 3min
The Importance of Explanation in Interpretability Methods
26:11 • 3min
The Pathological Nature of Modeling
29:01 • 2min
Influence Functions in Machine Learning
30:41 • 2min
Influence Functions in Modeling
32:52 • 3min
How to Compute an Influence Function
35:43 • 2min
The Importance of Influence Functions in NLP
37:25 • 2min
The Importance of Interpretability in Model Design
39:47 • 3min
How to Evaluate Explanation Methods
42:47 • 3min
How to Evaluate a Model Behavior
46:00 • 4min
The Importance of Gradient Based Methods in Modeling
49:41 • 2min
The Importance of Manipulating Gradients in Computer Vision
52:06 • 2min
The Caveats of Machine Learning
54:21 • 3min