Chapters
Introduction
00:00 • 3min
Are Large Language Models Going to Make Us Smarter?
02:44 • 2min
ChatGPT
04:29 • 3min
Is It Hard to Tell From Your Own Thoughts?
07:07 • 2min
Is There a Risk of Language Models Generating Text?
09:13 • 3min
Is There a Future in Language Models?
12:28 • 4min
I'll Frozen My Child Too Why Not?
16:18 • 2min
I've Never Thought of Learning Language
18:38 • 4min
Do You Think It's Possible for a Text Only Trained Large Language Model to Learn Backwards Relative to Humans?
22:31 • 2min
Are Language Models Learning Meaning?
24:06 • 3min
Is There an Affordance in Language Modeling?
27:28 • 4min
What's Happening Inside the Neural Network?
31:34 • 2min
I Think We're Going to Find Something Like This in ChatGPT
34:01 • 4min
How Much Do Humans Predict the Next Word?
37:33 • 4min
Is There a Standard Set of Criteria to Evaluate Models?
41:20 • 4min
Are Humans the Right Benchmark?
45:11 • 2min
Understanding Human Language Models Is a Good Idea
47:07 • 4min
I've Been Very Excited to See Evidence of Syntactic Structure in Large Language Models
51:13 • 5min
Is a Symbol an Emergent Property of Sub-Symbolic Processes?
56:01 • 3min
Are There Manifold Stories to Be Told With Language Models?
58:47 • 2min
The Limitations of Large Language Models
01:01:13 • 3min
The Fundamental Limitations of Neural Networks
01:04:14 • 4min
What Do Linguists Think of Language Models?
01:07:49 • 5min
What Is Language For?
01:12:51 • 4min
Language Is for Communicating, Not for Thought
01:16:23 • 2min
Is There a Language?
01:18:06 • 3min