

‘Robust effects of working memory demand during naturalistic language comprehension in language-selective cortex’ with Cory Shain
Oct 4, 2021
Chapters
Introduction
00:00 • 2min
What Kind of Work Did You Do After You Graduated From Ohio State?
01:37 • 3min
Is That How You Say It?
04:50 • 2min
How Did You Find Your Way Into Neuroscience?
06:25 • 3min
Where Did You Do Your PhD, and Where Are You a Postdoc?
08:58 • 2min
Is There an Alternative to Syntactic Analysis in Language Processing?
10:43 • 6min
Predictability Effects in Language Processing
16:27 • 4min
Surprisal Is Not Close to Parsing, Right?
20:53 • 1min
The Dominant View of Working Memory in Language Parsing
22:21 • 4min
Language Localizer and Multiple Demand Localizer
25:59 • 4min
Using the Reverse of the Language-Versus-Pseudowords Contrast to Map Out the MD Network?
30:19 • 2min
The Multiple Demand Network and Working Memory in Language Processing
32:00 • 2min
Are There Controls for High Level Language Processing?
34:23 • 5min
The Dependency Locality Theory
38:54 • 2min
More on the Dependency Locality Theory
41:11 • 4min
How to Derive Storage Costs From Left-Corner Parsing
44:57 • 3min
Did the Memory Predictors Improve Model Fit to the Data?
47:29 • 2min
Do You Think the DLT Is Better Than the Other Models?
49:17 • 2min
Is the Finding of Working Memory for Vectors a Good One?
51:28 • 2min
How Much of the Stimulus-Driven Signal Is Explainable?
53:49 • 2min
Is It Domain-Specific or Domain-General Working Memory?
55:57 • 2min
Is the MD Network the Best Possible Candidate for Domain-General Effects?
57:41 • 2min
Did It All Come Out the Way You Thought?
59:19 • 2min
The Take-Home Message: Is There a Memory Effect?
01:01:20 • 3min