Yannic Kilcher Videos (Audio Only)

ROME: Locating and Editing Factual Associations in GPT (Paper Explained & Author Interview)

Nov 23, 2022
Chapters
1. Introduction (00:00 • 4min)
2. Interpretability and the Practical Side (03:56 • 2min)
3. How Does Causal Tracing Work? (05:34 • 4min)
4. Is This a Causal Tracing Problem? (10:02 • 3min)
5. Using Causal Tracing to Determine the Location of the Space Needle (12:47 • 4min)
6. The MLP Layers Are Really Simple (16:58 • 5min)
7. What if We Don't Let MLP Modules Read Their Input? (21:35 • 2min)
8. The Secret Sauce Is Not Compute (23:40 • 5min)
9. How Can We Modify a Single Layer of a Neural Network? (28:14 • 6min)
10. The New Method of Learning New Facts Is a Good Idea (33:58 • 2min)
11. What Does It Mean to Know Something? (35:49 • 2min)
12. Using Zero-Shot Relation Extraction in Model Editing (37:48 • 4min)
13. The Difference Between Specificity and Reliability (41:57 • 2min)
14. The Secrets of Knowledge-Based Modeling (44:08 • 5min)
15. How to Decode the Vector in the v Space? (48:39 • 2min)
16. How Do You Choose the MLP? (50:52 • 2min)
17. Scaling a Distributed Network in GPT-2 XL (52:38 • 3min)
18. Is There a Difference Between the Feed-Forward Layer and the MLP Layer? (55:29 • 3min)
19. A Rank-One Update to a Matrix Matches Up Pretty Well (58:30 • 3min)
20. How to Crack Open Machine Learning Models? (01:01:40 • 3min)