
Brain Inspired
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
Latest episodes

Aug 17, 2022 • 1h 12min
BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models
Check out my short video series about what's missing in AI and Neuroscience.
Support the show to get full episodes, full archive, and join the Discord community.
Large language models, often now called "foundation models," are the model du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.
Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for efficient communication. She has also recently been comparing the activity in language models to activity in our brain's language network, finding commonality in the ability to predict upcoming words.
Emily M. Bender is a computational linguist at the University of Washington. Recently she has been considering questions about whether language models understand the meaning of the language they produce (no), whether we should be scaling language models as is the current practice (not really), how linguistics can inform language models, and more.
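Since next-word prediction comes up throughout the conversation, here is the simplest possible illustration of the task: a bigram model that predicts each next word from counts of what followed it in training text. This toy sketch is my own illustration (far simpler than a transformer), not anything from the episode:

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count, for each word, how often each other word follows it."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for w1, w2 in zip(words, words[1:]):
            model[w1][w2] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = ["the cat sat on the mat", "the cat ate the fish"]
model = train_bigram(corpus)
# predict_next(model, "the") -> "cat" ("cat" follows "the" twice in the corpus)
```

Modern language models replace these raw counts with learned representations over long contexts, but the training objective is the same: guess the next word.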
EvLab.
Emily's website.
Twitter: @ev_fedorenko; @emilymbender.
Related papers:
Language and thought are not the same thing: Evidence from neuroimaging and neurological patients. (Fedorenko)
The neural architecture of language: Integrative modeling converges on predictive processing. (Fedorenko)
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender)
Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. (Bender)
0:00 - Intro
4:35 - Language and cognition
15:38 - Grasping for meaning
21:32 - Are large language models producing language?
23:09 - Next-word prediction in brains and models
32:09 - Interface between language and thought
35:18 - Studying language in nonhuman animals
41:54 - Do we understand language enough?
45:51 - What do language models need?
51:45 - Are LLMs teaching us about language?
54:56 - Is meaning necessary, and does it matter how we learn language?
1:00:04 - Is our biology important for language?
1:04:59 - Future outlook

Aug 5, 2022 • 1h 25min
BI 143 Rodolphe Sepulchre: Mixed Feedback Control
Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the brain's mixed digital and analog signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and why "If you wish to contribute original work, be prepared to face loneliness," among other topics.
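The mixed-feedback idea - a fast positive loop paired with a slower negative loop - is exactly what makes neurons excitable. A minimal illustration is the classic FitzHugh-Nagumo model (a textbook example, not one of Rodolphe's own circuit designs), where the cubic voltage term provides fast positive feedback and the slow recovery variable provides negative feedback, and their interplay produces spiking:

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, dt=0.01, steps=20_000, a=0.7, b=0.8, tau=12.5):
    """Euler-integrate the FitzHugh-Nagumo model. The fast positive loop
    (cubic v term) and slow negative loop (recovery variable w) together
    yield relaxation oscillations, i.e. repetitive spiking."""
    v, w = -1.0, 1.0
    vs = np.empty(steps)
    for t in range(steps):
        dv = v - v**3 / 3 - w + I       # fast excitatory (positive) feedback
        dw = (v + a - b * w) / tau      # slow restorative (negative) feedback
        v += dt * dv
        w += dt * dw
        vs[t] = v
    return vs

trace = fitzhugh_nagumo()   # with I = 0.5 the model sits in its oscillatory regime
```

Remove either loop and the behavior collapses: without the negative loop the voltage latches up; without the positive loop it decays quietly to rest.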
Rodolphe's website.
Related papers:
Spiking Control Systems.
Control Across Scales by Positive and Negative Feedback.
Neuromorphic control. (arXiv version)
Related episodes:
BI 130 Eve Marder: Modulation of Networks
BI 119 Henry Yin: The Crisis in Neuroscience
0:00 - Intro
4:38 - Control engineer
9:52 - Control vs. dynamical systems
13:34 - Building vs. understanding
17:38 - Mixed feedback signals
26:00 - Robustness
28:28 - Eve Marder
32:00 - Loneliness
37:35 - Across levels
44:04 - Neuromorphics and neuromodulation
52:15 - Barrier to adopting neuromorphics
54:40 - Deep learning influence
58:04 - Beyond energy efficiency
1:02:02 - Deep learning for neuro
1:14:15 - Role of philosophy
1:16:43 - Doing it right

Jul 26, 2022 • 1h 43min
BI 142 Cameron Buckner: The New DoGMA
Cameron Buckner is a philosopher and cognitive scientist at The University of Houston. He is writing a book about the age-old philosophical debate on how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological "domain-general faculties" underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that building systems that possess the right handful of faculties, and combining those systems so they can cooperate in a general and flexible manner, will result in cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content - how our thoughts connect to the natural external world.
Cameron's website.
Twitter: @cameronjbuckner.
Related papers:
Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks.
A Forward-Looking Theory of Content.
Other sources Cameron mentions:
Innateness, AlphaZero, and Artificial Intelligence (Gary Marcus).
Radical Empiricism and Machine Learning Research (Judea Pearl).
Fodor’s guide to the Humean mind (Tamás Demeter).
0:00 - Intro
4:55 - Interpreting old philosophy
8:26 - AI and philosophy
17:00 - Empiricism vs. rationalism
27:09 - Domain-general faculties
33:10 - Faculty psychology
40:28 - New faculties?
46:11 - Human faculties
51:15 - Cognitive architectures
56:26 - Language
1:01:40 - Beyond dichotomous thinking
1:04:08 - Lower-level faculties
1:10:16 - Animal cognition
1:14:31 - A Forward-Looking Theory of Content

Jul 12, 2022 • 1h 32min
BI 141 Carina Curto: From Structure to Dynamics
Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics and string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience - the study of the geometric structures mapped out by active populations of neurons. We also discuss her work on combinatorial threshold-linear networks (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of the models' allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.
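For readers curious what these models look like, here is a minimal numerical sketch of a combinatorial threshold-linear network in the style of Carina's framework. The weight matrix is determined entirely by a directed graph, and the parameter values below are the standard choices from her papers; the simulation code itself is my own illustration, not hers:

```python
import numpy as np

def ctln_simulate(graph_edges, n, theta=1.0, eps=0.25, delta=0.5,
                  dt=0.01, steps=5000, x0=None):
    """Simulate a combinatorial threshold-linear network (CTLN):
        dx/dt = -x + [W x + theta]_+ ,
    where W is built purely from a directed graph:
        W[i, j] = -1 + eps   if j -> i is an edge,
        W[i, j] = -1 - delta otherwise (i != j),  W[i, i] = 0.
    """
    W = np.full((n, n), -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    for (j, i) in graph_edges:                  # edge j -> i
        W[i, j] = -1.0 + eps
    x = np.array(x0, dtype=float) if x0 is not None else 0.1 * np.arange(1, n + 1)
    xs = np.empty((steps, n))
    for t in range(steps):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        xs[t] = x
    return xs

# A 3-cycle graph (0 -> 1 -> 2 -> 0) yields a limit cycle in which the three
# neurons peak sequentially - dynamics predicted from graph structure alone.
xs = ctln_simulate([(0, 1), (1, 2), (2, 0)], n=3)
```

The point of the exercise is Carina's: the allowable dynamics (here, a sequential limit cycle rather than a stable fixed point) are readable off the graph before you ever run the simulation.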
Carina's website.
The Mathematical Neuroscience Lab.
Related papers:
A major obstacle impeding progress in brain science is the lack of beautiful models.
What can topology tell us about the neural code?
Predicting neural network dynamics via graphical analysis.
0:00 - Intro
4:25 - Background: Physics and math to study brains
20:45 - Beautiful and ugly models
35:40 - Topology
43:14 - Topology in hippocampal navigation
56:04 - Topology vs. dynamical systems theory
59:10 - Combinatorial linear threshold networks
1:25:26 - How much more math do we need to invent?

Jun 30, 2022 • 1h 20min
BI 140 Jeff Schall: Decisions and Eye Movements
Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers on the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement brain systems and in mathematical psychology models - in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Linking propositions, introduced by Davida Teller, are logical statements that ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. Strong inference, described by John Platt, is the scientific method on steroids - a way to make our scientific practice most productive and efficient. We discuss both of these topics in the context of Jeff's eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, compare the relatively small models he employs with today's huge deep learning models, cover many of his current projects, and plenty more. If you want to learn more about Jeff's work and approach, I recommend reading two of his review papers we discuss, in order: one written 20 years ago (On Building a Bridge Between Brain and Behavior), and the other about two years ago (Accumulators, Neurons, and Response Time).
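A generic race accumulator - a much-simplified cousin of the accumulator models Jeff works with - illustrates the basic idea behind the "Accumulators, Neurons, and Response Time" framework: evidence for each saccade target accumulates noisily, and the first accumulator to reach threshold determines both the choice and the response time. The parameters here are illustrative, not Jeff's:

```python
import numpy as np

def race_accumulators(drifts=(0.3, 0.05), threshold=10.0, noise=1.0,
                      dt=1.0, max_steps=10_000, seed=0):
    """Race-to-threshold model: each option accumulates noisy evidence;
    the first accumulator to hit threshold wins, yielding a (choice, RT)
    pair from a single mechanism."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(drifts))
    for t in range(1, max_steps + 1):
        x += np.asarray(drifts) * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drifts))
        if (x >= threshold).any():
            return int(np.argmax(x)), t     # (winning option, response time in steps)
    return -1, max_steps                    # no decision within the deadline

choice, rt = race_accumulators()
```

One appeal of such models, discussed in the episode, is that the accumulator variable can be compared directly against the firing rates of neurons recorded while a monkey decides where to look.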
Schall Lab.
Twitter: @LabSchall.
Related papers:
Linking Propositions.
Strong Inference.
On Building a Bridge Between Brain and Behavior.
Accumulators, Neurons, and Response Time.
0:00 - Intro
6:51 - Neurophysiology old and new
14:50 - Linking propositions
24:18 - Psychology working with neurophysiology
35:40 - Neuron doctrine, population doctrine
40:28 - Strong Inference and deep learning
46:37 - Model mimicry
51:56 - Scientific fads
57:07 - Current projects
1:06:38 - On leaving academia
1:13:51 - How academia has changed for better and worse

Jun 20, 2022 • 1h 20min
BI 139 Marc Howard: Compressed Time and Memory
Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations "spread out" in time. This kind of representation turns out to be ubiquitous in the brain and across cognitive functions, suggesting it is likely a canonical computation our brains employ. We also discuss some of the ways Marc is incorporating this mathematical operation into deep learning networks to improve their ability to handle information at different time scales.
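To make the Laplace idea concrete: a bank of leaky integrators with geometrically spaced decay rates encodes the Laplace transform of the past, and an approximate inverse transform (the Post formula, a k-th derivative across the rate axis) recovers "time cell"-like units whose tuning peaks at delay k/s and widens with delay - log-compressed memory. The sketch below is a bare-bones illustration of the framework with made-up grid parameters, not code from Marc's lab:

```python
import numpy as np
from math import factorial

k = 4                                  # order of the approximate inverse
s = np.geomspace(0.05, 50.0, 200)      # decay rates, geometrically spaced
t = np.arange(0.01, 100.0, 0.01)       # time since an impulse input at t = 0

# Laplace side: after an impulse, each leaky integrator dF/dt = -s*F decays as e^{-s t}
F = np.exp(-np.outer(s, t))

# Approximate inverse (Post formula): k-th derivative of F with respect to s
D = F.copy()
for _ in range(k):
    D = np.gradient(D, s, axis=0)
T = ((-1) ** k) * s[:, None] ** (k + 1) / factorial(k) * D

# Each row of T behaves like a time cell peaking near t* = k / s; later delays
# are covered by fewer, wider cells - a compressed, log-scale timeline.
i = 100                                # an interior unit (grid edges have finite-difference error)
t_peak = t[np.argmax(T[i])]
```

The appeal of the scheme, as discussed in the episode, is that a single pair of operations (transform and inverse) covers memory across many time scales without committing to any one scale.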
Theoretical Cognitive Neuroscience Lab.
Twitter: @marcwhoward777.
Related papers:
Memory as perception of the past: Compressed time in mind and brain.
Formal models of memory based on temporally-varying representations.
Cognitive computation using neural representations of time and space in the Laplace domain.
Time as a continuous dimension in natural and artificial networks.
DeepSITH: Efficient learning via decomposition of what and when across time scales.
0:00 - Intro
4:57 - Main idea: Laplace transforms
12:00 - Time cells
20:08 - Laplace, compression, and time cells
25:34 - Everywhere in the brain
29:28 - Episodic memory
35:11 - Randy Gallistel's memory idea
40:37 - Adding Laplace to deep nets
48:04 - Reinforcement learning
1:00:52 - Brad Wyble Q: What gets filtered out?
1:05:38 - Replay and complementary learning systems
1:11:52 - Howard Goldowsky Q: Gyorgy Buzsaki
1:15:10 - Obstacles

Jun 6, 2022 • 1h 52min
BI 138 Matthew Larkum: The Dendrite Hypothesis
Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers - and thus different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input - neither, one, or both - the neuron's output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals with feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more.
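The input-output logic described above can be caricatured as a tiny truth table. The thresholds and labels here are hypothetical simplifications for illustration, not Matthew's biophysical models:

```python
def pyramidal_mode(basal_drive, apical_drive,
                   basal_threshold=1.0, apical_threshold=1.0):
    """Caricature of dendritic coincidence detection in a layer 5 pyramidal cell:
    basal (feedforward-like) input alone  -> regular spiking,
    basal + apical (feedback-like) input  -> burst firing,
    neither, or apical alone              -> (mostly) silent at the soma."""
    basal_on = basal_drive >= basal_threshold
    apical_on = apical_drive >= apical_threshold
    if basal_on and apical_on:
        return "burst"
    if basal_on:
        return "regular"
    return "silent"
```

The burst mode is the interesting one: it fires only when feedforward evidence and feedback context arrive together, which is why the neuron can act as a coincidence detector between brain areas.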
Larkum Lab.
Twitter: @mattlark.
Related papers:
Cellular Mechanisms of Conscious Processing.
Perirhinal input to neocortical layer 1 controls learning. (bioRxiv: https://www.biorxiv.org/content/10.1101/713883v1)
Are dendrites conceptually useful?
Memories off the top of your head.
Do Action Potentials Cause Consciousness?
Blake Richards's episode discussing backpropagation in the brain (based on Matthew's experiments)
0:00 - Intro
5:31 - Background: Dendrites
23:20 - Cortical neuron bodies vs. branches
25:47 - Theories of cortex
30:49 - Feedforward and feedback hierarchy
37:40 - Dendritic integration hypothesis
44:32 - DIT vs. other consciousness theories
51:30 - Mac Shine Q1
1:04:38 - Are dendrites conceptually useful?
1:09:15 - Insights from implementation level
1:24:44 - How detailed to model?
1:28:15 - Do action potentials cause consciousness?
1:40:33 - Mac Shine Q2

May 27, 2022 • 1h 18min
BI 137 Brian Butterworth: Can Fish Count?
Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.
Brian's website: The Mathematical Brain.
Twitter: @b_butterworth
The book: Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds.
0:00 - Intro
3:19 - Why Counting?
5:31 - Dyscalculia
12:06 - Dyslexia
19:12 - Counting
26:37 - Origins of counting vs. language
34:48 - Counting vs. higher math
46:46 - Counting some things and not others
53:33 - How to test counting
1:03:30 - How does the brain count?
1:13:10 - Are numbers real?

May 17, 2022 • 1h 34min
BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology
Michel Bitbol is Director of Research at CNRS (Centre National de la Recherche Scientifique). Alex Gomez-Marin is a neuroscientist running his lab, The Behavior of Organisms Laboratory, at the Instituto de Neurociencias in Alicante. We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can't escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our "normal" scientific explorations can co-exist, including the study of minds, brains, and intelligence - our own and that of other organisms. We also discuss the "blind spot" of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more.
Michel's website.
Alex's lab: The Behavior of Organisms Laboratory.
Twitter: @behaviOrganisms (Alex)
Related papers:
The Blind Spot of Neuroscience.
The Life of Behavior.
A Clash of Umwelts.
Related events:
The Future Scientist (a conversation series)
0:00 - Intro
4:32 - The Blind Spot
15:53 - Phenomenology and interpretation
22:51 - Personal stories: appreciating phenomenology
37:42 - Quantum physics example
47:16 - Scientific explanation vs. phenomenological description
59:39 - How can phenomenology and science complement each other?
1:08:22 - Neurophenomenology
1:17:34 - Use of language
1:25:46 - Mutual constraints

May 6, 2022 • 1h 17min
BI 135 Elena Galea: The Stars of the Brain
Brains are often conceived as consisting of neurons and "everything else." As Elena discusses, the "everything else," including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That's partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology has only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it's possible to examine how astrocytes' signaling, with each other and with neurons, may complement the cognitive roles once thought to be the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and - Elena's favorite current hypothesis - their integrative role in negative feedback control.
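The feedback-control framing can be illustrated with the simplest possible integral controller: a slow integrator (the hypothesized astrocyte-like role) accumulates the error in some neuronal variable and feeds back a correction, driving the steady-state error to zero despite a persistent disturbance. This is a generic control-theory sketch under assumed parameters, not a model from Elena's work:

```python
def integral_feedback(setpoint=1.0, disturbance=0.4, ki=0.5, dt=0.01, steps=5000):
    """Integral negative feedback: the controller state u integrates the error,
    so at steady state the error must be exactly zero - the plant output x is
    held at the setpoint regardless of the (constant) disturbance."""
    x, u = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x
        u += ki * error * dt                 # slow integrator (controller)
        x += dt * (-x + u + disturbance)     # first-order plant with disturbance
    return x

final_x = integral_feedback()   # converges to the setpoint, not to setpoint + disturbance
```

The key property - perfect disturbance rejection via integration - is why integral feedback is a recurring motif in biological homeostasis, and it only requires the controller to be slow, which fits the timescales of astrocyte calcium signaling.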
Elena's website.
Twitter: @elenagalea1
Related papers:
A roadmap to integrate astrocytes into Systems Neuroscience.
Elena recommended this paper: Biological feedback control—Respect the loops.
0:00 - Intro
5:23 - The changing story of astrocytes
14:58 - Astrocyte research lags neuroscience
19:45 - Types of astrocytes
23:06 - Astrocytes vs neurons
26:08 - Computational roles of astrocytes
35:45 - Feedback control
43:37 - Energy efficiency
46:25 - Current technology
52:58 - Computational astroscience
1:10:57 - Do names for things matter?