

Brain Inspired
Paul Middlebrooks
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress toward understanding intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
Episodes

Jul 12, 2022 • 1h 32min
BI 141 Carina Curto: From Structure to Dynamics
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She applies her background in mathematical physics and string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience: the study of the geometrical structures mapped out by active populations of neurons. We also discuss her work on combinatorial threshold-linear networks (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of the model's allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.
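For the mathematically curious, here's a minimal sketch of the kind of model we discuss. This is my own toy code, not Carina's: it reads a threshold-linear network's weights directly off a directed graph, using the standard CTLN parameter conventions (eps, delta, theta) as I understand them from her papers, and simulates the dynamics with simple Euler integration. The directed 3-cycle is the classic example whose graph structure alone predicts a limit cycle.

```python
import numpy as np

def ctln_weights(G, eps=0.25, delta=0.5):
    """Weight matrix of a combinatorial threshold-linear network.
    G[i][j] = 1 means an edge from node j to node i. Off-diagonal
    weights are -1 + eps where an edge exists, -1 - delta elsewhere."""
    W = np.where(np.array(G) == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate(W, theta=1.0, x0=None, dt=0.01, steps=5000):
    """Euler-integrate the threshold-linear dynamics dx/dt = -x + [Wx + theta]_+."""
    x = np.zeros(W.shape[0]) if x0 is None else np.array(x0, dtype=float)
    traj = np.empty((steps, len(x)))
    for t in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + theta))
        traj[t] = x
    return traj

# A directed 3-cycle (1 -> 2 -> 3 -> 1): the graph structure alone
# predicts a limit cycle in which the units take turns being active.
G = [[0, 0, 1],
     [1, 0, 0],
     [0, 1, 0]]
traj = simulate(ctln_weights(G), x0=[0.2, 0.0, 0.0])
print(traj[-3:])  # late-time activity rotates among the three neurons
```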
Carina's website.
The Mathematical Neuroscience Lab.
Related papers:
A major obstacle impeding progress in brain science is the lack of beautiful models.
What can topology tell us about the neural code?
Predicting neural network dynamics via graphical analysis.
0:00 - Intro
4:25 - Background: Physics and math to study brains
20:45 - Beautiful and ugly models
35:40 - Topology
43:14 - Topology in hippocampal navigation
56:04 - Topology vs. dynamical systems theory
59:10 - Combinatorial threshold-linear networks
1:25:26 - How much more math do we need to invent?

Jun 30, 2022 • 1h 20min
BI 140 Jeff Schall: Decisions and Eye Movements
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers on the mechanisms of our decisions, choices, movement control, and attention, studied within the saccadic eye movement brain systems and with models from mathematical psychology; in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Linking propositions, as described by Davida Teller, are a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. Strong inference, as described by John Platt, is the scientific method on steroids: a way to make our scientific practice most productive and efficient. We discuss both of these topics in the context of Jeff's eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, how the relatively small models he employs compare with today's huge deep learning models, many of his current projects, and plenty more. If you want to learn more about Jeff's work and approach, I recommend reading, in order, two of the review papers we discuss as well. One was written 20 years ago (On Building a Bridge Between Brain and Behavior), and the other about two years ago (Accumulators, Neurons, and Response Time).
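For a taste of the accumulator framework we talk about, here's a toy race-to-threshold simulation. It's my illustration of the general idea, not Jeff's actual model, and the drift, noise, and threshold values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def race_trial(drifts, threshold=100.0, noise=2.0, dt=1.0):
    """One trial of a race model: each accumulator integrates its drift
    plus Gaussian noise, and the first to reach threshold determines
    both the choice and the response time (in ms here)."""
    acc = np.zeros(len(drifts))
    t = 0.0
    while acc.max() < threshold:
        acc += np.array(drifts) * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drifts))
        acc = np.maximum(acc, 0.0)  # firing rates can't go negative
        t += dt
    return int(acc.argmax()), t

# Target (drift 1.0) races a distractor (drift 0.7): mostly correct
# choices, with trial-to-trial variability in response times.
choices, rts = zip(*(race_trial([1.0, 0.7]) for _ in range(1000)))
print(f"accuracy: {np.mean(np.array(choices) == 0):.2f}, mean RT: {np.mean(rts):.0f} ms")
```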
Schall Lab.
Twitter: @LabSchall.
Related papers:
Linking Propositions.
Strong Inference.
On Building a Bridge Between Brain and Behavior.
Accumulators, Neurons, and Response Time.
0:00 - Intro
6:51 - Neurophysiology old and new
14:50 - Linking propositions
24:18 - Psychology working with neurophysiology
35:40 - Neuron doctrine, population doctrine
40:28 - Strong Inference and deep learning
46:37 - Model mimicry
51:56 - Scientific fads
57:07 - Current projects
1:06:38 - On leaving academia
1:13:51 - How academia has changed for better and worse

Jun 20, 2022 • 1h 20min
BI 139 Marc Howard: Compressed Time and Memory
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations "spread out" in time. This kind of representation turns out to be ubiquitous in the brain and across cognitive functions, suggesting it may be a canonical computation the brain uses to represent a wide variety of cognitive variables. We also discuss some of the ways Marc is incorporating this mathematical operation into deep learning networks to improve their ability to handle information at different time scales.
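To make the idea concrete, here's a minimal sketch of the scheme as I understand it (my toy code, not Marc's): a bank of leaky integrators with log-spaced decay rates maintains a running Laplace transform of the input, and Post's approximation to the inverse transform recovers a compressed timeline of the past, with older events represented more coarsely.

```python
import math
import numpy as np

# Log-spaced decay rates: each unit obeys dF/dt = -s*F + f(t), so the
# population holds a running Laplace transform of the input history.
s = np.logspace(-1.5, 1.5, 400)

def encode(signal, dt=0.01):
    F = np.zeros_like(s)
    for f_t in signal:
        F += dt * (-s * F + f_t)
    return F

def decode(F, k=4):
    """Approximate inverse Laplace transform (Post's formula):
    f~(tau) = (-1)^k / k! * s^(k+1) * d^k F / ds^k, evaluated at s = k/tau."""
    dF = F
    for _ in range(k):
        dF = np.gradient(dF, s)
    tau = k / s
    return tau, ((-1) ** k / math.factorial(k)) * s ** (k + 1) * dF

# Encode an impulse that happened 2 seconds ago, then reconstruct the past.
T, dt = 2.0, 0.01
signal = np.zeros(int(T / dt))
signal[0] = 1.0 / dt
tau, f_tilde = decode(encode(signal, dt))
# Peaks near the true 2 s lag (at k*T/(k+1) for small k), and the peak's
# width grows with its lag: older memories are more "spread out" in time.
print(f"memory of the impulse peaks at tau = {tau[np.argmax(f_tilde)]:.2f} s")
```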
Theoretical Cognitive Neuroscience Lab.
Twitter: @marcwhoward777.
Related papers:
Memory as perception of the past: Compressed time in mind and brain.
Formal models of memory based on temporally-varying representations.
Cognitive computation using neural representations of time and space in the Laplace domain.
Time as a continuous dimension in natural and artificial networks.
DeepSITH: Efficient learning via decomposition of what and when across time scales.
0:00 - Intro
4:57 - Main idea: Laplace transforms
12:00 - Time cells
20:08 - Laplace, compression, and time cells
25:34 - Everywhere in the brain
29:28 - Episodic memory
35:11 - Randy Gallistel's memory idea
40:37 - Adding Laplace to deep nets
48:04 - Reinforcement learning
1:00:52 - Brad Wyble Q: What gets filtered out?
1:05:38 - Replay and complementary learning systems
1:11:52 - Howard Goldowsky Q: Gyorgy Buzsaki
1:15:10 - Obstacles

Jun 6, 2022 • 1h 52min
BI 138 Matthew Larkum: The Dendrite Hypothesis
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers, and thus from different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input (neither, one, or both), the neuron's output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals with feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more.
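The coincidence-detection logic is simple enough to caricature in a few lines. This is just my toy summary of the mode-switching story, not a biophysical model, and the thresholds are made up:

```python
def l5_output_mode(basal_drive, apical_drive, basal_thresh=1.0, apical_thresh=1.0):
    """Toy mode switch for a layer-5 pyramidal neuron:
    basal (feedforward) input alone -> regular spiking;
    apical (feedback) input alone -> largely silent (it can't drive the soma);
    both together -> apical calcium spike -> burst firing (BAC firing)."""
    basal_on = basal_drive >= basal_thresh
    apical_on = apical_drive >= apical_thresh
    if basal_on and apical_on:
        return "burst spiking"
    if basal_on:
        return "regular spiking"
    return "silent"

for basal, apical in [(0.0, 0.0), (1.5, 0.0), (0.0, 1.5), (1.5, 1.5)]:
    print(f"basal={basal}, apical={apical} -> {l5_output_mode(basal, apical)}")
```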
Larkum Lab.
Twitter: @mattlark.
Related papers:
Cellular Mechanisms of Conscious Processing.
Perirhinal input to neocortical layer 1 controls learning. (bioRxiv link: https://www.biorxiv.org/content/10.1101/713883v1)
Are dendrites conceptually useful?
Memories off the top of your head.
Do Action Potentials Cause Consciousness?
Blake Richards's episode discussing back-propagation in the brain (based on Matthew's experiments).
0:00 - Intro
5:31 - Background: Dendrites
23:20 - Cortical neuron bodies vs. branches
25:47 - Theories of cortex
30:49 - Feedforward and feedback hierarchy
37:40 - Dendritic integration hypothesis
44:32 - DIT vs. other consciousness theories
51:30 - Mac Shine Q1
1:04:38 - Are dendrites conceptually useful?
1:09:15 - Insights from implementation level
1:24:44 - How detailed to model?
1:28:15 - Do action potentials cause consciousness?
1:40:33 - Mac Shine Q2

May 27, 2022 • 1h 18min
BI 137 Brian Butterworth: Can Fish Count?
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes the counting and numerical abilities of many different species, suggesting that our ability to count is evolutionarily very old, since so many diverse species share it. We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.
Brian's website: The Mathematical Brain.
Twitter: @b_butterworth.
The book: Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds.
0:00 - Intro
3:19 - Why Counting?
5:31 - Dyscalculia
12:06 - Dyslexia
19:12 - Counting
26:37 - Origins of counting vs. language
34:48 - Counting vs. higher math
46:46 - Counting some things and not others
53:33 - How to test counting
1:03:30 - How does the brain count?
1:13:10 - Are numbers real?

May 17, 2022 • 1h 34min
BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Michel Bitbol is Director of Research at CNRS (Centre National de la Recherche Scientifique). Alex Gomez-Marin is a neuroscientist running his lab, The Behavior of Organisms Laboratory, at the Instituto de Neurociencias in Alicante. We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can't escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our "normal" scientific explorations can coexist, including in the study of minds, brains, and intelligence: our own and that of other organisms. We also discuss the "blind spot" of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more.
Michel's website.
Alex's lab: The Behavior of Organisms Laboratory.
Twitter: @behaviOrganisms (Alex).
Related papers:
The Blind Spot of Neuroscience.
The Life of Behavior.
A Clash of Umwelts.
Related events:
The Future Scientist (a conversation series).
0:00 - Intro
4:32 - The Blind Spot
15:53 - Phenomenology and interpretation
22:51 - Personal stories: appreciating phenomenology
37:42 - Quantum physics example
47:16 - Scientific explanation vs. phenomenological description
59:39 - How can phenomenology and science complement each other?
1:08:22 - Neurophenomenology
1:17:34 - Use of language
1:25:46 - Mutual constraints

May 6, 2022 • 1h 17min
BI 135 Elena Galea: The Stars of the Brain
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Brains are often conceived as consisting of neurons and "everything else." As Elena discusses, the "everything else," including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That's partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology has only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it's possible to ask how astrocytes' signaling, with each other and with neurons, may complement the cognitive roles once thought the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and (her favorite current hypothesis) their integrative role in negative feedback control.
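To illustrate what an integrative negative-feedback role could look like, here's a toy loop (my illustration only, not Elena's model): a fast "neural" variable is perturbed by input, while a slow "astrocyte-like" variable integrates deviations from a set point and feeds back inhibition.

```python
# Toy negative-feedback loop: slow integral feedback (the "astrocyte-like"
# variable a) holds fast "neural" activity r near a set point, even when
# the external drive steps up partway through the simulation.
set_point, dt = 1.0, 0.01
r, a = 0.0, 0.0
for step in range(int(60 / dt)):
    drive = 2.0 if step * dt > 20 else 1.0  # step increase in input at t = 20 s
    r += dt * 10.0 * (-r + drive - a)       # fast neural dynamics
    a += dt * 0.2 * (r - set_point)         # slow integral feedback
print(f"activity settled at r = {r:.2f} (set point {set_point})")
```

The integral term guarantees that, at steady state, activity returns exactly to the set point regardless of the drive, which is the signature property of this kind of feedback loop.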
Elena's website.
Twitter: @elenagalea1.
Related papers:
A roadmap to integrate astrocytes into Systems Neuroscience.
Elena recommended this paper: Biological feedback control—Respect the loops.
0:00 - Intro
5:23 - The changing story of astrocytes
14:58 - Astrocyte research lags neuroscience
19:45 - Types of astrocytes
23:06 - Astrocytes vs neurons
26:08 - Computational roles of astrocytes
35:45 - Feedback control
43:37 - Energy efficiency
46:25 - Current technology
52:58 - Computational astroscience
1:10:57 - Do names for things matter?

Apr 27, 2022 • 1h 26min
BI 134 Mandyam Srinivasan: Bee Flight and Cognition
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Srini is Emeritus Professor at the Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, and proximity to objects, and to gracefully land. These abilities are largely governed by control systems, balancing incoming perceptual signals against internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility that they feel pain, and the nature of their possible subjective conscious experience.
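One of the classic findings, the centering response, is easy to sketch as a control loop. This toy code is my illustration of the principle, not Srini's model; the corridor geometry, speed, and gain are arbitrary:

```python
# Toy centering response: optic flow on each eye is roughly forward
# speed divided by the lateral distance to that wall. Steering away
# from the faster flow drives the bee toward the corridor's midline.
half_width, v, dt, gain = 1.0, 1.0, 0.05, 0.5
y = 0.7  # lateral offset from midline (m), positive = toward the right wall
for _ in range(200):
    flow_left = v / (half_width + y)
    flow_right = v / (half_width - y)
    y += dt * gain * (flow_left - flow_right)  # steer away from faster flow
print(f"final offset from midline: {y:.3f} m")  # converges toward 0
```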
Srini's website.
Related papers:
Vision, perception, navigation and 'cognition' in honeybees and applications to aerial robotics.
0:00 - Intro
3:34 - Background
8:20 - Bee experiments
14:30 - Bee flight and navigation
28:05 - Landing
33:06 - Umwelt and perception
37:26 - Bee-inspired aerial robotics
49:10 - Motion camouflage
51:52 - Cognition in bees
1:03:10 - Small vs. big brains
1:06:42 - Pain in bees
1:12:50 - Subjective experience
1:15:25 - Deep learning
1:23:00 - Path forward

Apr 15, 2022 • 1h 29min
BI 133 Ken Paller: Lucid Dreaming, Memory, and Sleep
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Ken discusses the recent work in his lab that allows communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of consciousness, sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App, which is freely available via his lab. We also discuss much of his work on memory and learning in general, and specifically as it relates to sleep, like reactivating specific memories during sleep to improve learning.
Ken's Cognitive Neuroscience Laboratory.
Twitter: @kap101.
The Lucid Dreaming App.
Related papers:
Memory and Sleep: How Sleep Cognition Can Change the Waking Mind for the Better.
Does memory reactivation during sleep support generalization at the cost of memory specifics?
Real-time dialogue between experimenters and dreamers during REM sleep.
0:00 - Intro
2:48 - Background and types of memory
14:44 - Consciousness and memory
23:32 - Phases of sleep and wakefulness
28:19 - Sleep, memory, and learning
33:50 - Targeted memory reactivation
48:34 - Problem solving during sleep
51:50 - 2-way communication with lucid dreamers
1:01:43 - Confounds to the paradigm
1:04:50 - Limitations and future studies
1:09:35 - Lucid dreaming app
1:13:47 - How sleep can inform AI
1:20:18 - Advice for students

Apr 3, 2022 • 1h 17min
BI 132 Ila Fiete: A Grid Scaffold for Memory
Announcement:
I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. Learn more here.
Support the show to get full episodes, full archive, and join the Discord community.
Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex: signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist, her approach as a "neurophysicist", and a review she's publishing about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework.
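Here's a toy sketch of the "pinning" idea as I understand it (my code, not Ila's model): pre-built scaffold states are hetero-associatively linked to arbitrary cortical patterns, so memory storage never has to sculpt new attractors from scratch, and a noisy cortical cue can still reinstate the stored content via the scaffold.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pre-built "scaffold" states (standing in for grid-derived patterns) are
# linked by Hebbian outer products to arbitrary cortical "content" patterns.
n_scaffold, n_cortex, n_memories = 200, 400, 20
scaffold = rng.choice([-1, 1], size=(n_memories, n_scaffold))
cortex = rng.choice([-1, 1], size=(n_memories, n_cortex))

W_in = scaffold.T @ cortex / n_cortex     # cortex -> scaffold ("pinning")
W_out = cortex.T @ scaffold / n_scaffold  # scaffold -> cortex (recall)

# Cue with a corrupted cortical pattern: flip ~30% of one memory's bits.
cue = cortex[3] * rng.choice([1, -1], size=n_cortex, p=[0.7, 0.3])
state = np.sign(W_in @ cue)        # the cue selects a scaffold state
recalled = np.sign(W_out @ state)  # the scaffold reinstates clean content
print("overlap with stored pattern:", (recalled == cortex[3]).mean())
# In the full story, the scaffold's own attractor dynamics would clean
# up `state` even further before recall.
```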
The Fiete Lab.
Related papers:
A structured scaffold underlies activity in the hippocampus.
Attractor and integrator networks in the brain.
0:00 - Intro
3:36 - "Neurophysicist"
9:30 - Bottom-up vs. top-down
15:57 - Tool scavenging
18:21 - Cognitive maps and hippocampus
22:40 - Hopfield networks
27:56 - Internal scaffold
38:42 - Place cells
43:44 - Grid cells
54:22 - Grid cells encoding place cells
59:39 - Scaffold model: stacked Hopfield networks
1:05:39 - Attractor landscapes
1:09:22 - Landscapes across scales
1:12:27 - Dimensionality of landscapes