Support the show to get full episodes, full archive, and join the Discord community.
Mike and I discuss his modeling approach to study cognition. Many people I have on the podcast use deep neural networks to study brains, where the idea is to train or optimize the model to perform a task, then compare the model's properties with brain properties. Mike's approach differs in at least two ways. First, he builds the architecture of his models using connectivity data from fMRI recordings. Second, he doesn't train his models; instead, he uses functional connectivity data from the fMRI recordings to assign weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), or network coding models. We walk through his approach and what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules through instruction, and he fields questions from Kanaka Rajan, Kendrick Kay, and Patryk Laurent.
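For listeners curious what this looks like in code, here is a minimal sketch of the core ENN idea: network weights are taken directly from an empirically estimated functional-connectivity matrix rather than learned by training, and activity is propagated through those fixed weights to predict regional activations. This is an illustrative approximation, not Mike's actual code; the random arrays are placeholders standing in for real fMRI data.

```python
import numpy as np

# Sketch of an empirically-estimated neural network (ENN):
# weights come from functional connectivity (FC) estimated from
# fMRI data, with no training step.

rng = np.random.default_rng(0)

n_regions = 360      # e.g., regions in a cortical parcellation
n_timepoints = 500   # fMRI timepoints used to estimate FC

# Placeholder for real resting-state fMRI time series (regions x time).
bold = rng.standard_normal((n_regions, n_timepoints))

# Estimate FC as pairwise correlations; these correlations become the
# network's fixed weights.
fc_weights = np.corrcoef(bold)
np.fill_diagonal(fc_weights, 0.0)  # exclude self-connections

def propagate(source_activity: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Predict each region's activity as the FC-weighted sum of
    activity in all other regions (one propagation step)."""
    return weights @ source_activity

# Stand-in for real task activation estimates (e.g., GLM betas).
task_activity = rng.standard_normal(n_regions)
predicted = propagate(task_activity, fc_weights)

# With real data, the correlation between predicted and observed
# task activations is one way to evaluate the model.
r = np.corrcoef(predicted, task_activity)[0, 1]
print(f"prediction-observation correlation: {r:.3f}")
```

The key contrast with deep learning is visible in the sketch: there is no loss function and no gradient descent. The model's explanatory power comes entirely from whether empirically measured connectivity, used as weights, suffices to predict observed brain activity.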
0:00 - Intro
4:58 - Cognitive control
7:44 - Rapid Instructed Task Learning and Flexible Hub Theory
15:53 - Patryk Laurent question: free will
26:21 - Kendrick Kay question: fMRI limitations
31:55 - Empirically-estimated neural networks (ENNs)
40:51 - ENNs vs. deep learning
45:30 - Clinical relevance of ENNs
47:32 - Kanaka Rajan question: a proposed collaboration
56:38 - Advantage of modeling multiple regions
1:05:30 - How ENNs work
1:12:48 - How ENNs might benefit artificial intelligence
1:19:04 - The need for causality
1:24:38 - Importance of luck and serendipity