Brain Inspired

BI 163 Ellie Pavlick: The Mind of a Language Model

Mar 20, 2023
Chapters
1. Introduction (00:00 • 3min)
2. Are Large Language Models Going to Make Us Smarter? (02:44 • 2min)
3. ChatGPT (04:29 • 3min)
4. Is It Hard to Tell From Your Own Thoughts? (07:07 • 2min)
5. Is There a Risk of Language Models Generating Text? (09:13 • 3min)
6. Is There a Future in Language Models? (12:28 • 4min)
7. I'll Freeze My Child Too, Why Not? (16:18 • 2min)
8. I've Never Thought of Learning Language (18:38 • 4min)
9. Do You Think It's Possible for a Text-Only Trained Large Language Model to Learn Backwards Relative to Humans? (22:31 • 2min)
10. Are Language Models Learning Meaning? (24:06 • 3min)
11. Is There an Affordance in Language Modeling? (27:28 • 4min)
12. What's Happening Inside the Neural Network? (31:34 • 2min)
13. I Think We're Going to Find Something Like This in ChatGPT (34:01 • 4min)
14. How Much Do Humans Predict the Next Word? (37:33 • 4min)
15. Is There a Standard Set of Criteria to Evaluate Models? (41:20 • 4min)
16. Are Humans the Right Benchmark? (45:11 • 2min)
17. Understanding Human Language Models Is a Good Idea (47:07 • 4min)
18. I've Been Very Excited to See Evidence of Syntactic Structure in Large Language Models (51:13 • 5min)
19. Is a Symbol an Emergent Property of Sub-Symbolic Processes? (56:01 • 3min)
20. Are There Manifold Stories to Be Told With Language Models? (58:47 • 2min)
21. The Limitations of Large Language Models (01:01:13 • 3min)
22. The Fundamental Limitations of Neural Networks (01:04:14 • 4min)
23. What Do Linguists Think of Language Models? (01:07:49 • 5min)
24. What Is Language For? (01:12:51 • 4min)
25. Language Is for Communicating, Not for Thought (01:16:23 • 2min)
26. Is There a Language? (01:18:06 • 3min)