
Brain Inspired
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
Latest episodes

Sep 13, 2023 • 1h 45min
BI 174 Alicia Juarrero: Context Changes Everything
Philosopher Alicia Juarrero discusses the importance of constraints in understanding complex systems like the brain. They explore the concept of context and its impact on human behavior. The conversation touches on emergent properties, culture, and the role of top-down constraints. They also delve into embodied cognition, brain activity, dimensionality, the difference between mind and brain, and the concept of wisdom. The episode concludes with explorations of hope, protein folding, and higher levels of organization and constraints.

Aug 30, 2023 • 1h 36min
BI 173 Justin Wood: Origins of Visual Intelligence
In this podcast, Justin Wood discusses his work comparing the visual cognition of newborn chicks and AI models. He uses controlled-rearing techniques to understand visual intelligence and build systems that emulate biological organisms. They explore topics like object recognition, reverse engineering, collective behavior, and the potential of transformers in cognitive science.

Aug 7, 2023 • 1h 31min
BI 172 David Glanzman: Memory All The Way Down
Support the show to get full episodes, full archive, and join the Discord community.
David runs his lab at UCLA, where he's also a distinguished professor. David used to believe what is currently the mainstream view: that our memories are stored in our synapses, those connections between our neurons. As we learn, the synaptic connections strengthen and weaken until they're just right, and that serves to preserve the memory. That's been the dominant view in neuroscience for decades, and it's the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others' experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David discusses where he and Randy differ in their thinking. This episode starts out pretty technical, as David describes the series of experiments that changed his mind, but after that we broaden our discussion to many of the surrounding issues regarding whether his story about memory is true. And we discuss meta-issues like how old, discarded ideas in science often find their way back, what it's like studying a non-mainstream topic, including the challenges of getting funded for it, and so on.
David's Faculty Page.
Related papers
The central importance of nuclear mechanisms in the storage of memory.
David mentions Arc and virus-like transmission:
The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer.
Structure of an Arc-ane virus-like capsid.
David mentions many of the ideas from the Pushing the Boundaries: Neuroscience, Cognition, and Life Symposium.
Related episodes:
BI 126 Randy Gallistel: Where Is the Engram?
BI 127 Tomás Ryan: Memory, Instinct, and Forgetting

Jul 22, 2023 • 1h 25min
BI 171 Mike Frank: Early Language and Cognition
Check out my free video series about what's missing in AI and Neuroscience
My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike's main interests center on how children learn language - in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition.
We discuss that, as well as:
his love for developing open datasets that anyone can use;
the dance he dances between bottom-up data-driven approaches in this big-data era, traditional experimental approaches, and top-down theory-driven approaches;
how early language learning in children differs from LLM learning; and
Mike's rational speech act model of language use, which considers the intentions, or pragmatics, of speakers and listeners in dialogue.
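For readers unfamiliar with the rational speech act (RSA) framework mentioned above, here is a minimal sketch of its recursive structure - a literal listener, a pragmatic speaker, and a pragmatic listener - using a made-up toy lexicon (the utterances, worlds, and truth table below are illustrative assumptions, not from Mike's papers):

```python
import numpy as np

# Toy RSA model: two utterances, three possible referents ("worlds").
# meanings[u][w] = 1 if utterance u is literally true of world w.
utterances = ["glasses", "hat"]
worlds = ["face1", "face2", "face3"]
meanings = np.array([
    [1, 1, 0],   # "glasses" is true of face1 and face2
    [0, 1, 1],   # "hat" is true of face2 and face3
], dtype=float)

def normalize(m, axis):
    s = m.sum(axis=axis, keepdims=True)
    return np.divide(m, s, out=np.zeros_like(m), where=s > 0)

# Literal listener L0: P(world | utterance) proportional to literal truth (uniform prior)
L0 = normalize(meanings, axis=1)

# Pragmatic speaker S1: P(utterance | world) proportional to exp(alpha * log L0)
alpha = 1.0
with np.errstate(divide="ignore"):
    utility = np.where(L0 > 0, np.log(L0), -np.inf)
S1 = normalize(np.exp(alpha * utility), axis=0)

# Pragmatic listener L1: reasons about the speaker's choice of utterance
L1 = normalize(S1, axis=1)
print(L1)
```

With alpha = 1, L1 hearing "glasses" puts 2/3 probability on face1: the listener reasons that a speaker meaning face2 could have said either utterance, so "glasses" more distinctively picks out face1.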
Language & Cognition Lab
Twitter: @mcxfrank.
I mentioned Mike's tweet thread about saying LLMs "have" cognitive functions:
Related papers:
Pragmatic language interpretation as probabilistic inference.
Toward a “Standard Model” of Early Language Learning.
The pervasive role of pragmatics in early language.
The Structure of Developmental Variation in Early Childhood.
Relational reasoning and generalization using non-symbolic neural networks.
Unsupervised neural network models of the ventral visual stream.

Jul 11, 2023 • 1h 17min
BI 170 Ali Mohebi: Starting a Research Lab
In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future.
Ali's website.
Twitter: @mohebial

Jun 28, 2023 • 1h 42min
BI 169 Andrea Martin: Neural Dynamics and Language
My guest today is Andrea Martin, who leads the Language and Computation in Neural Systems research group at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language - like its structure, its compositionality, and its infinite expressibility - while adhering to physiological data we can measure from human brains.
Her theoretical model of language, among other things, brings in the idea of low-dimensional manifolds and neural dynamics along those manifolds. We've discussed manifolds a lot on the podcast: a manifold is a kind of abstract structure in the space of possible neural population activity, and that manifold structure defines the range of possible trajectories, or pathways, the neural dynamics can take over time.
One of Andrea's ideas is that manifolds might be a way for the brain to combine two properties of how we learn and use language. One of those properties is the statistical regularities found in language - a given word, for example, occurs more often near some words and less often near some other words. This statistical approach is the foundation of how large language models are trained. The other property is the more formal structure of language: how it's arranged and organized in such a way that gives it meaning to us. Perhaps these two properties of language can come together as a single trajectory along a neural manifold. But she has lots of ideas, and we discuss many of them. And of course we discuss large language models, and how Andrea thinks of them with respect to biological cognition. We talk about modeling in general and what models do and don't tell us, and much more.
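As a toy illustration of the manifold idea (entirely simulated, not Andrea's model): if the activity of many neurons is driven by only a few latent variables, population trajectories occupy a low-dimensional manifold inside the high-dimensional activity space, which PCA can reveal:

```python
import numpy as np

# 50 simulated "neurons" driven by just 2 latent dynamical variables,
# so population trajectories live near a 2D manifold in the 50D space.
rng = np.random.default_rng(2)
T, n_neurons, n_latent = 500, 50, 2

t = np.linspace(0, 4 * np.pi, T)
latents = np.stack([np.sin(t), np.cos(2 * t)], axis=1)   # 2D latent trajectory
mixing = rng.normal(size=(n_latent, n_neurons))           # latent-to-neuron mapping
activity = latents @ mixing + 0.05 * rng.normal(size=(T, n_neurons))

# PCA via SVD: variance concentrates in ~2 components, exposing the manifold's dimension
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)
print(explained[:3])  # the first two components dominate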
Andrea's website.
Twitter: @andrea_e_martin.
Related papers
A Compositional Neural Architecture for Language
An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions
Neural dynamics differentially encode phrases and sentences during spoken language comprehension
Hierarchical structure in language and action: A formal comparison
Andrea mentions this book: The Geometry of Biological Time.

Jun 2, 2023 • 1h 55min
BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness
This is one in a periodic series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art change our scientific attitudes and perspectives?
Frauke Sandig and Eric Black recently made the documentary film AWARE: Glimpses of Consciousness, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinkers approaching the topic from their various perspectives.
This isn't a "normal" Brain Inspired episode, but I hope you enjoy the discussion!
AWARE: Glimpses of Consciousness
Umbrella Films
0:00 - Intro
19:42 - Mechanistic reductionism
45:33 - Changing views during lifetime
53:49 - Did making the film alter your views?
57:49 - ChatGPT
1:04:20 - Materialist assumption
1:11:00 - Science of consciousness
1:20:49 - Transhumanism
1:32:01 - Integrity
1:36:19 - Aesthetics
1:39:50 - Response to the film

May 27, 2023 • 1h 28min
BI 167 Panayiota Poirazi: AI Brains Need Dendrites
Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology. Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, performing many varieties of important signal transformation before signals reach the cell body. For example, in 2003, Yiota showed that, because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown that single neurons can act as deeper and deeper multi-layer networks. In Yiota's opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks will need to favor as well moving forward.
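The "single neuron as a two-layer network" idea can be sketched schematically: treat each dendritic branch as a hidden unit that applies its own nonlinearity to its synaptic inputs, then pass the summed branch outputs through a somatic nonlinearity. All weights and inputs below are random illustration values, not fit to any data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_branches, syn_per_branch = 5, 20
x = rng.normal(size=(n_branches, syn_per_branch))   # synaptic drive, grouped by branch
w = rng.normal(size=(n_branches, syn_per_branch))   # synaptic weights

# "Point neuron" view: one global sum, one output nonlinearity
point_output = sigmoid(np.sum(w * x))

# "Two-layer" view: branch-local nonlinearities first (hidden layer = dendritic
# branches), then a somatic nonlinearity over the weighted branch outputs
branch_out = sigmoid(np.sum(w * x, axis=1))
v = rng.normal(size=n_branches)                     # branch-to-soma coupling weights
soma_output = sigmoid(v @ branch_out)

print(point_output, soma_output)
```

The two views generally give different outputs for the same inputs: the branch-local nonlinearities let the cell compute input interactions a single global sum cannot.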
Poirazi Lab
Twitter: @YiotaPoirazi.
Related papers
Drawing Inspiration from Biological Dendrites to Empower Artificial Neural Networks.
Illuminating dendritic function with computational models.
Introducing the Dendrify framework for incorporating dendrites to spiking neural networks.
Pyramidal Neuron as Two-Layer Neural Network
0:00 - Intro
3:04 - Yiota's background
6:40 - Artificial networks and dendrites
9:24 - Dendrites special sauce?
14:50 - Where are we in understanding dendrite function?
20:29 - Algorithms, plasticity, and brains
29:00 - Functional unit of the brain
42:43 - Engrams
51:03 - Dendrites and nonlinearity
54:51 - Spiking neural networks
56:02 - Best level of biological detail
57:52 - Dendrify
1:05:41 - Experimental work
1:10:58 - Dendrites across species and development
1:16:50 - Career reflection
1:17:57 - Evolution of Yiota's thinking

May 9, 2023 • 1h 27min
BI 166 Nick Enfield: Language vs. Reality
Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. A central question in the book is: what is language for? What's the function of language? You might be familiar with the debate about whether language evolved for each of us to think our wonderful human thoughts, or for communicating those thoughts to each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people - I have a thought, I send it to you in language, and that thought is now in your head - then Nick wouldn't take either side of that debate. He argues the function of language goes beyond the transmission of information, and instead is primarily an evolved solution for social coordination - coordinating our behaviors and attention. When we use language, we're creating maps in our heads so we can agree on where to go.
For example, when I say, "This is brain inspired," I'm pointing you to a place to meet me on a conceptual map, saying, "Get ready, we're about to have a great time again!" In any case, with those four words, "This is brain inspired," I'm not just transmitting information from my head into your head. I'm providing you with a landmark so you can focus your attention appropriately.
From that premise - that language is about social coordination - we talk about a handful of topics in his book, like the relationship between language and reality, and the idea that all language is framing - that is, how we say something influences how we think about it. We discuss how our language changes in different social situations, the role of stories, and of course how LLMs fit into Nick's story about language.
Nick's website
Twitter: @njenfield
Book:
Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists.
Papers:
Linguistic concepts are self-generating choice architectures
0:00 - Intro
4:23 - Is learning about language important?
15:43 - Linguistic Anthropology
28:56 - Language and truth
33:57 - How special is language
46:19 - Choice architecture and framing
48:19 - Language for thinking or communication
52:30 - Agency and language
56:51 - Large language models
1:16:18 - Getting language right
1:20:48 - Social relationships and language

Apr 12, 2023 • 1h 39min
BI 165 Jeffrey Bowers: Psychology Gets No Respect
Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image: researchers compared the activity in models good at that task to the activity in the parts of our brains good at it. It's been found in various other tasks using various other models and analyses, many of which we've discussed on previous episodes, and more recently a similar story has emerged regarding a similarity between language-related activity in our brains and the activity in large language models. Namely, the ability of our brains to predict an upcoming word can be correlated with the model's ability to predict an upcoming word. So the word is that these deep learning models are the best models of how our brains and cognition work.
However, this is where Jeff Bowers comes in and raises the psychology flag, so to speak. His message is that these predictive approaches to comparing artificial and biological cognition aren't enough, and can mask important differences between them. What we need to do is start performing more hypothesis-driven tests, like those performed in psychology, to ask whether the models are indeed solving tasks the way our brains and minds do. Jeff and his group, among others, have been doing just that, and are discovering differences between models and minds that may be important if we want to use models to understand minds. We discuss some of his work and thoughts in this regard, and a lot more.
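The word-prediction comparison described above can be sketched schematically. Here both the per-word model surprisal and the per-word brain signal are simulated stand-ins (real analyses use language-model probabilities and fMRI/ECoG/EEG measures), so the sketch only shows the shape of the analysis, not any actual result:

```python
import numpy as np

# Simulated stand-in for per-word surprisal (negative log probability) from a
# language model, and a simulated "brain signal" constructed to track it.
rng = np.random.default_rng(1)
n_words = 200

model_surprisal = rng.gamma(shape=2.0, scale=1.0, size=n_words)   # pretend LM surprisal
noise = rng.normal(scale=0.5, size=n_words)
brain_signal = 0.8 * model_surprisal + noise                      # pretend neural response

# The comparison itself: correlate the two per-word series
r = np.corrcoef(model_surprisal, brain_signal)[0, 1]
print(round(r, 2))
```

A high correlation in this setup is the kind of evidence behind the "models predict brains" story; Jeff's point is that such correlations alone can hide qualitative differences in how the predictions are computed.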
Website
Twitter: @jeffrey_bowers
Related papers:
Deep Problems with Neural Network Models of Human Vision.
Parallel Distributed Processing Theory in the Age of Deep Networks.
Successes and critical failures of neural networks in capturing human-like speech recognition.
0:00 - Intro
3:52 - Testing neural networks
5:35 - Neuro-AI needs psychology
23:36 - Experiments in AI and neuroscience
23:51 - Why build networks like our minds?
44:55 - Vision problem spaces, solution spaces, training data
55:45 - Do we implement algorithms?
1:01:33 - Relational and combinatorial cognition
1:06:17 - Comparing representations in different networks
1:12:31 - Large language models
1:21:10 - Teaching LLMs nonsense languages