
Brain Inspired
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
Latest episodes

May 9, 2023 • 1h 27min
BI 166 Nick Enfield: Language vs. Reality
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. A central question in the book is: what is language for? What's the function of language? You might be familiar with the debate about whether language evolved for each of us thinking our wonderful human thoughts, or for communicating those thoughts to each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people - I have a thought, I send it to you in language, and that thought is now in your head - then Nick wouldn't take either side of that debate. He argues the function of language goes beyond the transmission of information, and instead is primarily an evolved solution for social coordination - coordinating our behaviors and attention. When we use language, we're creating maps in our heads so we can agree on where to go.
For example, when I say, "This is brain inspired," I'm pointing you to a place to meet me on a conceptual map, saying, "Get ready, we're about to have a great time again!" In any case, with those 4 words, "This is brain inspired," I'm not just transmitting information from my head into your head. I'm providing you with a landmark so you can focus your attention appropriately.
From that premise, that language is about social coordination, we talk about a handful of topics in his book, like the relationship between language and reality, and the idea that all language is framing - that is, how we say something influences how we think about it. We discuss how our language changes in different social situations, the role of stories, and of course, how LLMs fit into Nick's story about language.
Nick's website
Twitter: @njenfield
Book:
Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists.
Papers:
Linguistic concepts are self-generating choice architectures
0:00 - Intro
4:23 - Is learning about language important?
15:43 - Linguistic Anthropology
28:56 - Language and truth
33:57 - How special is language
46:19 - Choice architecture and framing
48:19 - Language for thinking or communication
52:30 - Agency and language
56:51 - Large language models
1:16:18 - Getting language right
1:20:48 - Social relationships and language

Apr 12, 2023 • 1h 39min
BI 165 Jeffrey Bowers: Psychology Gets No Respect
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image, and where researchers have compared the activity in models good at that task to the activity in the parts of our brains good at that task. It's been found in various other tasks using various other models and analyses, many of which we've discussed on previous episodes, and more recently a similar story has emerged regarding a similarity between language-related activity in our brains and the activity in large language models. Namely, the ability of our brains to predict an upcoming word can be correlated with the models' ability to predict an upcoming word. So the word is that these deep learning type models are the best models of how our brains and cognition work.
However, this is where Jeff Bowers comes in and raises the psychology flag, so to speak. His message is that these predictive approaches to comparing artificial and biological cognition aren't enough, and can mask important differences between them. What we need to do is start performing more hypothesis-driven tests, like those performed in psychology, to ask whether the models are indeed solving tasks like our brains and minds do. Jeff and his group, among others, have been doing just that and are discovering differences between models and minds that may be important if we want to use models to understand minds. We discuss some of his work and thoughts in this regard, and a lot more.
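For readers curious what the brain-model comparisons mentioned above look like in practice, here's a minimal sketch of representational similarity analysis, one common way of quantifying that kind of correspondence. This is my own illustration, not code from Jeff or any guest; the arrays are random placeholders standing in for real model activations and brain recordings.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical data: responses to the same 50 images from a model layer
# (50 images x 512 units) and a brain region (50 images x 200 voxels).
rng = np.random.default_rng(0)
model_acts = rng.standard_normal((50, 512))
brain_acts = rng.standard_normal((50, 200))

# Representational dissimilarity matrices: pairwise distances between the
# responses to each pair of images, computed separately for each system.
model_rdm = pdist(model_acts, metric="correlation")
brain_rdm = pdist(brain_acts, metric="correlation")

# The headline similarity score: rank correlation between the two RDMs.
# With random data this hovers near zero; with real data it can be strikingly high.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: {rho:.3f}")
```

Jeff's point, roughly, is that a high score on a summary metric like this can coexist with deep differences that only targeted, hypothesis-driven experiments reveal.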
Website
Twitter: @jeffrey_bowers
Related papers:
Deep Problems with Neural Network Models of Human Vision.
Parallel Distributed Processing Theory in the Age of Deep Networks.
Successes and critical failures of neural networks in capturing human-like speech recognition.
0:00 - Intro
3:52 - Testing neural networks
5:35 - Neuro-AI needs psychology
23:36 - Experiments in AI and neuroscience
23:51 - Why build networks like our minds?
44:55 - Vision problem spaces, solution spaces, training data
55:45 - Do we implement algorithms?
1:01:33 - Relational and combinatorial cognition
1:06:17 - Comparing representations in different networks
1:12:31 - Large language models
1:21:10 - Teaching LLMs nonsense languages

Apr 1, 2023 • 1h 32min
BI 164 Gary Lupyan: How Language Affects Thought
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Gary Lupyan runs the Lupyan Lab at the University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models. But Gary is more focused on how language - naming things, categorizing things - changes our cognition related to those things. How does naming something change our perception of it, and so on? He's interested in how concepts come about and how they map onto language. So we talk about some of his work and ideas related to those topics.
And we actually start the discussion with some of Gary's work related to the variability of individual humans' phenomenal experience, and how that affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to experimentally test.
Lupyan Lab.
Twitter: @glupyan.
Related papers:
Hidden Differences in Phenomenal Experience.
Verbal interference paradigms: A systematic review investigating the role of language in cognition.
Gary mentioned Richard Feynman's Ways of Thinking video.
Gary and Andy Clark's Aeon article: Super-cooperators.
0:00 - Intro
2:36 - Words and communication
14:10 - Phenomenal variability
26:24 - Co-operating minds
38:11 - Large language models
40:40 - Neuro-symbolic AI, scale
44:43 - How LLMs have changed Gary's thoughts about language
49:26 - Meaning, grounding, and language
54:26 - Development of language
58:53 - Symbols and emergence
1:03:20 - Language evolution in the LLM era
1:08:05 - Concepts
1:11:17 - How special is language?
1:18:08 - AGI

Mar 20, 2023 • 1h 22min
BI 163 Ellie Pavlick: The Mind of a Language Model
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work - what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models. For example, probing them to see whether something symbol-like might be implemented in the models, even though they are deep learning neural networks, which aren't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding - that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.
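To give a flavor of the probing approach mentioned above, here's a minimal sketch of a linear probe: train a simple classifier to read a candidate property out of a model's hidden states, and if it succeeds well above chance on held-out data, that property is at least linearly decodable from the representations. This is an illustration under my own assumptions, not Ellie's code; the random vectors stand in for actual LLM activations and the binary labels for a hypothetical linguistic property.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder "hidden states": 1,000 sentence representations of size 768,
# each paired with a hypothetical binary property (e.g., subject is plural).
rng = np.random.default_rng(0)
hidden_states = rng.standard_normal((1000, 768))
labels = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, labels, test_size=0.2, random_state=0
)

# The linear probe itself: accuracy well above chance on the test split would
# suggest the property is encoded in the representations. With random data,
# as here, it stays near 0.5.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")
```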
Language Understanding and Representation Lab
Twitter: @Brown_NLP
Related papers
Semantic Structure in Deep Learning.
Pretraining on Interactions for Learning Grounded Affordance Representations.
Mapping Language Models to Grounded Conceptual Spaces.
0:00 - Intro
2:34 - Will LLMs make us dumb?
9:01 - Evolution of language
17:10 - Changing views on language
22:39 - Semantics, grounding, meaning
37:40 - LLMs, humans, and prediction
41:19 - How to evaluate LLMs
51:08 - Structure, semantics, and symbols in models
1:00:08 - Dimensionality
1:02:08 - Limitations of LLMs
1:07:47 - What do linguists think?
1:14:23 - What is language for?

Mar 8, 2023 • 1h 23min
BI 162 Earl K. Miller: Thoughts are an Emergent Property
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular, he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl's career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of the role brain oscillations play in our cognition.
Recently on BI we've discussed oscillations quite a bit. In episode 153, Carolyn Dicey-Jennings discussed her philosophical ideas relating attention to the notion of the self, and she leans a lot on Earl's research to make that argument. In episode 160, Ole Jensen discussed his work in humans showing that low-frequency oscillations exert top-down control on incoming sensory stimuli, which is directly in agreement with Earl's work over many years in nonhuman primates. So we continue that discussion relating low-frequency oscillations to executive control. We also discuss a new concept Earl has developed called spatial computing, which is an account of how brain oscillations can dictate where in various brain areas neural activity will be on or off, and hence contribute or not to ongoing mental function. We also discuss working memory in particular, and a host of related topics.
Miller lab.
Twitter: @MillerLabMIT.
Related papers:
An integrative theory of prefrontal cortex function. Annual Review of Neuroscience.
Working Memory Is Complex and Dynamic, Like Your Thoughts.
Traveling waves in the prefrontal cortex during working memory.
0:00 - Intro
6:22 - Evolution of Earl's thinking
14:58 - Role of the prefrontal cortex
25:21 - Spatial computing
32:51 - Homunculus problem
35:34 - Self
37:40 - Dimensionality and thought
46:13 - Reductionism
47:38 - Working memory and capacity
1:01:45 - Capacity as a principle
1:05:44 - Silent synapses
1:10:16 - Subspaces in dynamics

Feb 24, 2023 • 1h 35min
BI 161 Hugo Spiers: Navigation and Spatial Cognition
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Hugo Spiers runs the Spiers Lab at University College London. In general, Hugo is interested in understanding spatial cognition, like navigation, in relation to other processes like planning and goal-related behavior, and how brain areas like the hippocampus and prefrontal cortex coordinate these cognitive functions. So, in this episode, we discuss a range of his research and thoughts around those topics. You may have heard about the studies he's been involved with for years regarding London taxi drivers and how their hippocampus changes as a result of their grueling efforts to memorize how to best navigate London. We talk about that, and we discuss the concept of a schema, which is roughly an abstracted form of knowledge that helps you know how to behave in different environments. Probably the most common example is that we all have a schema for eating at a restaurant: independent of which restaurant we visit, we know about servers, menus, and so on. Hugo is interested in spatial schemas, for things like navigating a new city you haven't visited. Hugo describes his work using reinforcement learning methods to compare how humans and animals solve navigation tasks. And finally we talk about the video game Hugo has been using to collect vast amounts of data related to navigation, to answer questions like how our navigation ability changes over our lifetimes, which factors seem to matter most for our navigation skills, and so on.
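The reinforcement learning methods mentioned above can take many forms; one idea from the predictive-maps literature listed below is the successor representation, a map of which states you expect to occupy in the future given where you are now. Here's a toy sketch of that computation, my own illustration under simplified assumptions rather than Hugo's actual analysis code.

```python
import numpy as np

# Toy environment: 4 locations in a corridor, with the agent stepping
# randomly to an adjacent location at each time step (a stand-in for a
# navigation policy in a real environment).
T = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],
])

# Successor representation: expected discounted future occupancy of each
# location from each starting location, M = (I - gamma * T)^-1.
gamma = 0.9
M = np.linalg.inv(np.eye(4) - gamma * T)
print(np.round(M, 2))  # row i: how much location j will be visited starting from i
```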
Spiers Lab.
Twitter: @hugospiers.
Related papers
Predictive maps in rats and humans for spatial navigation.
From cognitive maps to spatial schemas.
London taxi drivers: A review of neurocognitive studies and an exploration of how they build their cognitive map of London.
Explaining World-Wide Variation in Navigation Ability from Millions of People: Citizen Science Project Sea Hero Quest.

Feb 7, 2023 • 1h 29min
BI 160 Ole Jensen: Rhythms of Cognition
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Ole Jensen is co-director of the Centre for Human Brain Health at the University of Birmingham, where he runs his Neuronal Oscillations Group lab. Ole is interested in how the oscillations in our brains affect our cognition by helping to shape the spiking patterns of neurons, and by helping to allocate resources to the parts of our brains that are relevant for whatever ongoing behaviors we're performing in different contexts. People have been studying oscillations for decades, linking different frequencies of oscillations to a bunch of different cognitive functions. Some of what we discuss today is Ole's work on alpha oscillations, which are around 10 hertz, so 10 oscillations per second. The overarching story is that alpha oscillations are thought to inhibit or disrupt processing in brain areas that aren't needed during a given behavior. And therefore, by disrupting everything that's not needed, resources are allocated to the brain areas that are needed. We discuss his work in this vein on attention - you may remember the episode with Carolyn Dicey-Jennings, and her ideas about how findings like Ole's are evidence we all have selves. We also talk about the role of alpha rhythms for working memory, for moving our eyes, and for previewing what we're about to look at before we move our eyes. More broadly, we discuss the role of oscillations in cognition in general, and of course what this might mean for developing better artificial intelligence.
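To make "alpha power" concrete, here's a minimal sketch of how one might extract the amplitude of a roughly 10 Hz rhythm from a recording: band-pass filter in the 8-12 Hz alpha band, then take the envelope. This is my own illustration on a simulated signal, not Ole's analysis pipeline; the sampling rate and filter settings are arbitrary choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Simulated 10-second "recording" at 500 Hz: a 10 Hz alpha rhythm plus noise.
fs = 500
t = np.arange(0, 10, 1 / fs)
recording = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

# Band-pass filter in the alpha band (8-12 Hz)...
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, recording)

# ...and take the instantaneous amplitude envelope via the Hilbert transform.
# On the gating-by-inhibition account, higher alpha power over a region is
# read as that region being functionally suppressed for the current task.
alpha_power = np.abs(hilbert(alpha)) ** 2
print(f"mean alpha power: {alpha_power.mean():.2f}")
```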
The Neuronal Oscillations Group.
Twitter: @neuosc.
Related papers
Shaping functional architecture by oscillatory alpha activity: gating by inhibition
FEF-Controlled Alpha Delay Activity Precedes Stimulus-Induced Gamma-Band Activity in Visual Cortex
The theta-gamma neural code
A pipelining mechanism supporting previewing during visual exploration and reading.
Specific lexico-semantic predictions are associated with unique spatial and temporal patterns of neural activity.
0:00 - Intro
2:58 - Oscillations' importance over the years
5:51 - Oscillations big picture
17:62 - Oscillations vs. traveling waves
22:00 - Oscillations and algorithms
28:53 - Alpha oscillations and working memory
44:46 - Alpha as the controller
48:55 - Frequency tagging
52:49 - Timing of attention
57:41 - Pipelining neural processing
1:03:38 - Previewing during reading
1:15:50 - Previewing, prediction, and large language models
1:24:27 - Dyslexia

Jan 26, 2023 • 1h 29min
BI 159 Chris Summerfield: Natural General Intelligence
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Chris Summerfield runs the Human Information Processing Lab at the University of Oxford, and he's a research scientist at DeepMind. You may remember him from episode 95 with Sam Gershman, when we discussed ideas around the usefulness of neuroscience and psychology for AI. Since then, Chris has released his book, Natural General Intelligence: How understanding the brain can help us build AI. In the book, Chris makes the case that inspiration and communication between the cognitive sciences and AI are hindered by the different languages each field speaks. But in reality, there has always been, and still is, a lot of overlap and convergence about ideas of computation and intelligence, and he illustrates this using tons of historical and modern examples.
Human Information Processing Lab.
Twitter: @summerfieldlab.
Book: Natural General Intelligence: How understanding the brain can help us build AI.
Other books mentioned:
Are We Smart Enough to Know How Smart Animals Are? by Frans de Waal
The Mind is Flat by Nick Chater.
0:00 - Intro
2:20 - Natural General Intelligence
8:05 - AI and Neuro interaction
21:42 - How to build AI
25:54 - Umwelts and affordances
32:07 - Different kind of intelligence
39:16 - Ecological validity and AI
48:30 - Is reward enough?
1:05:14 - Beyond brains
1:15:10 - Large language models and brains

Jan 16, 2023 • 1h 35min
BI 158 Paul Rosenbloom: Cognitive Architectures
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Paul Rosenbloom is Professor Emeritus of Computer Science at the University of Southern California. In the early 1980s, Paul, along with John Laird and the early AI pioneer Allen Newell, developed one of the earliest and best-known cognitive architectures, called SOAR. A cognitive architecture, as Paul defines it, is a model of the fixed structures and processes underlying minds - in Paul's case, the human mind. And SOAR was aimed at generating general intelligence. He doesn't work on SOAR anymore, although SOAR is still alive and well in the hands of his old partner John Laird. He did go on to develop another cognitive architecture, called Sigma, and in the intervening years between those projects, among other things, Paul stepped back and explored how our various scientific domains are related, and how computing itself should be considered a great scientific domain. That's in his book On Computing: The Fourth Great Scientific Domain.
He also helped develop the Common Model of Cognition, which isn't a cognitive architecture itself, but instead a theoretical model meant to generate consensus regarding the minimal components for a human-like mind. The idea is roughly to create a shared language and framework among cognitive architecture researchers, so that whatever cognitive architecture you work on, you have a basis to compare it to and can communicate effectively among your peers.
All of what I just said, and much of what we discuss, can be found in Paul's memoir, In Search of Insight: My Life as an Architectural Explorer.
Paul's website.
Related papers
Working memoir: In Search of Insight: My Life as an Architectural Explorer.
Book: On Computing: The Fourth Great Scientific Domain.
A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics.
Analysis of the human connectome data supports the notion of a “Common Model of Cognition” for human and human-like intelligence across domains.
Common Model of Cognition Bulletin.
0:00 - Intro
3:26 - A career of exploration
7:00 - Allen Newell
14:47 - Relational model and dichotomic maps
24:22 - Cognitive architectures
28:31 - SOAR cognitive architecture
41:14 - Sigma cognitive architecture
43:58 - SOAR vs. Sigma
53:06 - Cognitive architecture community
55:31 - Common model of cognition
1:11:13 - What's missing from the common model
1:17:48 - Brains vs. cognitive architectures
1:21:22 - Mapping the common model onto the brain
1:24:50 - Deep learning
1:30:23 - AGI

Jan 2, 2023 • 1h 21min
BI 157 Sarah Robins: Philosophy of Memory
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Sarah Robins is a philosopher at the University of Kansas, one of a growing handful of philosophers specializing in memory. Much of her work focuses on memory traces, which is roughly the idea that somehow our memories leave a trace in our minds. We discuss memory traces themselves and how they relate to the engram (see BI 126 Randy Gallistel: Where Is the Engram?, and BI 127 Tomás Ryan: Memory, Instinct, and Forgetting).
Psychology has divided memories into many categories - the taxonomy of memory. Sarah and I discuss how memory traces may cross-cut those categories, suggesting we may need to re-think our current ontology and taxonomy of memory.
We discuss a couple of challenges to the idea of a stable memory trace in the brain. Neural dynamics is the notion that all our molecules and synapses are constantly changing and being recycled. Memory consolidation refers to the process of transferring our memory traces from an early, unstable version to a more stable, long-term version in a different part of the brain. Sarah thinks neither challenge poses a real threat to the idea of a stable memory trace.
We also discuss the impact of optogenetics on the philosophy and neuroscience of memory, the debate about whether memory and imagination are essentially the same thing, whether memory's function is future-oriented, and whether we want to build AI with our often faulty human-like memory or with perfect memory.
Sarah's website.
Twitter: @SarahKRobins.
Related papers:
Her Memory chapter, with Felipe de Brigard, in the book Mind, Cognition, and Neuroscience: A Philosophical Introduction.
Memory and Optogenetic Intervention: Separating the engram from the ecphory.
Stable Engrams and Neural Dynamics.
0:00 - Intro
4:18 - Philosophy of memory
5:10 - Making a move
6:55 - State of philosophy of memory
11:19 - Memory traces or the engram
20:44 - Taxonomy of memory
25:50 - Cognitive ontologies, neuroscience, and psychology
29:39 - Optogenetics
33:48 - Memory traces vs. neural dynamics and consolidation
40:32 - What is the boundary of a memory?
43:00 - Process philosophy and memory
45:07 - Memory vs. imagination
49:40 - Constructivist view of memory and imagination
54:05 - Is memory for the future?
58:00 - Memory errors and intelligence
1:00:42 - Memory and AI
1:06:20 - Creativity and memory errors