Chapters
Introduction
00:00 • 3min
AlexNet
02:41 • 2min
AlexNet: A PhD in Interpretability
05:02 • 2min
Inherently Interpretable Models Are One-Way Communication
06:38 • 2min
Human Experiments - A Toy Showcase of ML Interpretability
08:21 • 2min
Machine Learning - The Gold Standard
10:12 • 2min
Towards a Rigorous Science of Interpretability
11:56 • 2min
The Issues I Saw at the Time
13:34 • 2min
The Interpretability Field Is Taking a Similar Path
15:05 • 4min
Is There a Confirmation Bias in Machine Learning?
18:36 • 2min
Sanity Checks for Saliency Maps
20:28 • 3min
Machine Learning - What's Your Worst Fear?
23:56 • 2min
Is There a Language Between Humans and Machines?
25:36 • 2min
The Relationship Between Humans and Machines
27:39 • 3min
Cross-Training and Mental Models
30:29 • 2min
Is There a Shift in Interpretability?
32:32 • 2min
Adaptation and Deficiency in the Medical Field
34:19 • 2min
TCAV Methods - Concept-Based Explanations
35:51 • 3min
The Power of TCAV and the Directional Derivative
39:01 • 2min
TCAV - Concept-Based Explanations
41:21 • 2min
How Can We Use AI to Help People Be More Creative?
43:08 • 2min
What's Your Concept of Calmness?
45:21 • 2min
TCAV - Can We Discover Concepts?
47:02 • 3min
How to Improve the Design of a Neural Network?
49:45 • 2min
How Do Different Architectures Influence Different Concepts?
52:10 • 3min
AlphaZero - Is There Even an Overlap Between AlphaZero and How Humans Play Chess?
55:30 • 2min
AlphaZero Picks Up Human Concepts Like Material Imbalance
57:03 • 2min
How Do Human Chess Players Learn Differently?
59:02 • 2min
How Can AlphaZero Surprise Us in a Way That's Productive?
01:00:55 • 3min
The Importance of Having Experts in the Room in Machine Learning
01:03:44 • 3min
Is Your Work Impacting the World?
01:06:35 • 2min
Don't Change Who You Are
01:08:49 • 3min