
Brain Inspired
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
Latest episodes

Jan 5, 2022 • 1h 39min
BI 124 Peter Robin Hiesinger: The Self-Assembling Brain
Support the show to get full episodes, full archive, and join the Discord community.
Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the "start". This contrasts with modern deep learning networks, which begin with minimal information in their connectivity and rely almost solely on learning to gain their function. Robin suggests we won't be able to create anything close to human-like intelligence unless we build artificial networks through an algorithmic growth process and an evolutionary selection process.
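As a toy illustration of that contrast (my sketch, not Robin's model), the snippet below grows a connectivity matrix by iterating a simple local rule from a tiny seed, so structure accumulates over time, and compares it with a deep-learning-style random initialization. All names and parameters are invented for illustration.

```python
import numpy as np

# Toy illustration: a compact "genome" (here just a growth rule and a few
# parameters) is unfolded by a growth process over time, producing a
# structured connectivity matrix. Contrast with a deep-learning network,
# whose connectivity starts as unstructured random weights.

rng = np.random.default_rng(0)
N = 64                                   # number of neurons in the grown network

def grow_connectivity(n_steps=200, branch_p=0.3):
    """Unfold connectivity by iterated local growth: each new connection is
    added near an existing one, so structure accumulates over time."""
    W = np.zeros((N, N))
    W[0, 1] = 1.0                        # seed connection
    for _ in range(n_steps):
        pre, post = np.argwhere(W > 0)[rng.integers((W > 0).sum())]
        if rng.random() < branch_p:      # branch: grow a new outgoing wire
            W[post, rng.integers(N)] = 1.0
        else:                            # extend: add a neighboring wire
            W[pre, (post + 1) % N] = 1.0
    return W

grown = grow_connectivity()                      # information unfolded via time
random_init = rng.normal(0, 0.1, size=(N, N))    # deep-learning-style start
print("grown nonzero connections:", int((grown > 0).sum()))
```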
Hiesinger Neurogenetics Laboratory
Twitter: @HiesingerLab
Book: The Self-Assembling Brain: How Neural Networks Grow Smarter
0:00 - Intro
3:01 - The Self-Assembling Brain
21:14 - Including growth in networks
27:52 - Information unfolding and algorithmic growth
31:27 - Cellular automata
40:43 - Learning as a continuum of growth
45:01 - Robustness, autonomous agents
49:11 - Metabolism vs. connectivity
58:00 - Feedback at all levels
1:05:32 - Generality vs. specificity
1:10:36 - Whole brain emulation
1:20:38 - Changing view of intelligence
1:26:34 - Popular and wrong vs. unknown and right

Dec 26, 2021 • 1h 19min
BI 123 Irina Rish: Continual Learning
Support the show to get full episodes, full archive, and join the Discord community.
Irina is a faculty member at Mila-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface, using AI for neuroscience applications and using neural principles to help improve AI. We discuss her work on biologically plausible alternatives to back-propagation, which use "auxiliary variables" in addition to the normal connection weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner so they improve on new tasks as those tasks are introduced. Catastrophic forgetting is an obstacle in modern deep learning: a network forgets old tasks when it is trained on new tasks. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning, seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help lifelong learning in networks.
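The snippet below is a minimal sketch of the catastrophic-forgetting problem, plus one simple weight-anchoring remedy in the spirit of methods like elastic weight consolidation; it is not Irina's auxiliary-variable method, and the tasks, dimensions, and penalty strength are invented for illustration.

```python
import numpy as np

# Minimal sketch: a linear model trained sequentially on two regression tasks
# forgets task A after training on task B, unless a simple penalty anchors
# the weights near their task-A solution.

rng = np.random.default_rng(1)
d = 20
w_A, w_B = rng.normal(size=d), rng.normal(size=d)            # true task weights

def make_task(w_true, n=200):
    X = rng.normal(size=(n, d))
    return X, X @ w_true

def train(X, y, w, anchor=None, lam=0.0, lr=0.01, epochs=200):
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        if anchor is not None:                    # penalize drifting from the old solution
            grad += lam * (w - anchor)
        w = w - lr * grad
    return w

def loss(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

XA, yA = make_task(w_A)
XB, yB = make_task(w_B)

w = train(XA, yA, np.zeros(d))                    # learn task A
w_naive = train(XB, yB, w)                        # then task B: forgets A
w_anchored = train(XB, yB, w, anchor=w, lam=5.0)  # anchored: forgets much less
print("task-A loss, naive:   ", loss(XA, yA, w_naive))
print("task-A loss, anchored:", loss(XA, yA, w_anchored))
```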
Irina's website
Twitter: @irinarish
Related papers:
Beyond Backprop: Online Alternating Minimization with Auxiliary Variables
Towards Continual Reinforcement Learning: A Review and Perspectives
Lifelong learning video tutorial: DLRL Summer School 2021 - Lifelong Learning - Irina Rish
0:00 - Intro
3:26 - AI for Neuro, Neuro for AI
14:59 - Utility of philosophy
20:51 - Artificial general intelligence
24:34 - Back-propagation alternatives
35:10 - Inductive bias vs. scaling generic architectures
45:51 - Continual learning
59:54 - Neuro-inspired continual learning
1:06:57 - Learning trajectories

Dec 12, 2021 • 1h 33min
BI 122 Kohitij Kar: Visual Intelligence
Support the show to get full episodes and join the Discord community.
Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in James DiCarlo's lab, where he helped develop the convolutional neural network models that have become the standard for explaining core object recognition. He is starting his own lab at York University, where he will continue to expand and refine those models, adding important biological details and incorporating models of brain areas outside the ventral visual stream. He will also continue recording neural activity and performing perturbation studies to better understand the networks involved in our visual cognition.
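To make "the standard for explaining core object recognition" concrete, here is a rough sketch of the usual encoding-model recipe (my simplification, not Ko's exact pipeline): take a CNN layer's features for a set of images, fit a ridge regression that predicts each recorded neuron's response, and score held-out predictions. The feature and response arrays below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
n_images, n_features, n_neurons = 500, 256, 50
features = rng.normal(size=(n_images, n_features))       # stand-in for CNN layer activations
responses = features @ rng.normal(size=(n_features, n_neurons)) * 0.1 \
            + rng.normal(size=(n_images, n_neurons))      # stand-in for recorded responses

train, test = slice(0, 400), slice(400, 500)
lam = 10.0                                                # ridge penalty
X, Y = features[train], responses[train]
B = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

pred = features[test] @ B
for unit in range(3):                                     # per-neuron prediction accuracy
    r = np.corrcoef(pred[:, unit], responses[test][:, unit])[0, 1]
    print(f"neuron {unit}: held-out r = {r:.2f}")
```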
VISUAL INTELLIGENCE AND TECHNOLOGICAL ADVANCES LAB
Twitter: @KohitijKar
Related papers:
Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior
Neural population control via deep image synthesis
Related episode: BI 075 Jim DiCarlo: Reverse Engineering Vision
0:00 - Intro
3:49 - Background
13:51 - Where are we in understanding vision?
19:46 - Benchmarks
21:21 - Falsifying models
23:19 - Modeling vs. experiment speed
29:26 - Simple vs. complex models
35:34 - Dorsal visual stream and deep learning
44:10 - Modularity and brain area roles
50:58 - Chemogenetic perturbation, DREADDs
57:10 - Future lab vision, clinical applications
1:03:55 - Controlling visual neurons via image synthesis
1:12:14 - Is it enough to study nonhuman animals?
1:18:55 - Neuro/AI intersection
1:26:54 - What is intelligence?

Dec 2, 2021 • 1h 43min
BI 121 Mac Shine: Systems Neurobiology
Support the show to get full episodes, full archive, and join the Discord community.
Mac and I discuss his systems-level approach to understanding brains, and his theoretical work suggesting important roles for the thalamus, basal ganglia, and cerebellum in shifting the dynamical landscape of brain function across varying behavioral contexts. We also discuss his recent interest in the ascending arousal system and neuromodulators. Mac thinks the neocortex has been the sole focus of too much neuroscience research, and that subcortical brain regions and circuits play a much larger role in our intelligence than they are usually given credit for.
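As a toy illustration of "shifting the dynamical landscape" (my cartoon, not Mac's models), the snippet below shows how a single gain parameter, of the kind neuromodulators are thought to control, changes the number of attractors of a one-unit rate model. The equation and numbers are invented for illustration.

```python
import numpy as np

# dx/dt = -x + tanh(g * x): for g < 1 there is one attractor at x = 0;
# for g > 1 the landscape splits into two stable states -- a minimal example
# of how a neuromodulatory gain could reshape the dynamics.

def simulate(g, x0, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        x += dt * (-x + np.tanh(g * x))
    return x

for g in (0.5, 1.5):
    finals = [round(simulate(g, x0), 3) for x0 in (-1.0, 0.1, 1.0)]
    print(f"gain g={g}: trajectories settle at {finals}")
```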
Shine Lab
Twitter: @jmacshine
Related papers:
The thalamus integrates the macrosystems of the brain to facilitate complex, adaptive brain network dynamics
Computational models link cellular mechanisms of neuromodulation to large-scale neural dynamics
0:00 - Intro
6:32 - Background
10:41 - Holistic approach
18:19 - Importance of thalamus
35:19 - Thalamus circuitry
40:30 - Cerebellum
46:15 - Predictive processing
49:32 - Brain as dynamical attractor landscape
56:48 - System 1 and system 2
1:02:38 - How to think about the thalamus
1:06:45 - Causality in complex systems
1:11:09 - Clinical applications
1:15:02 - Ascending arousal system and neuromodulators
1:27:48 - Implications for AI
1:33:40 - Career serendipity
1:35:12 - Advice

Nov 21, 2021 • 1h 40min
BI 120 James Fitzgerald, Andrew Saxe, Weinan Sun: Optimizing Memories
Support the show to get full episodes, full archive, and join the Discord community.
James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that the hippocampus creates our episodic memories of individual events, full of particular details, and that a complementary process, via mechanisms like hippocampal replay, slowly consolidates those memories within the neocortex. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but deep learning has yet to achieve. We discuss what their theory predicts about how the "correct" amount of consolidation depends on how much noise and variability there is in the learning environment, how their model handles this, and how it relates to our brains and behavior.
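For a flavor of the teacher-student setup mentioned in the timestamps below (my simplification, not the paper's model), the sketch has a shallow linear "student" consolidate noisy examples produced by a "teacher": copying the stored examples exactly generalizes worse than a regularized consolidation when labels are noisy. The dimensions and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_train, n_test, noise = 50, 60, 1000, 1.0
w_teacher = rng.normal(size=d)

X = rng.normal(size=(n_train, d))
y = X @ w_teacher + noise * rng.normal(size=n_train)      # noisy stored examples ("notebook")
X_test = rng.normal(size=(n_test, d))
y_test = X_test @ w_teacher                               # what generalization should capture

def ridge(X, y, lam):
    """Shallow linear student fit with an L2 penalty of strength lam."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in (1e-6, 10.0):                                  # ~exact copy vs. regularized consolidation
    w_student = ridge(X, y, lam)
    err = np.mean((X_test @ w_student - y_test) ** 2)
    print(f"lambda={lam:g}: test error = {err:.2f}")
```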
James' Janelia page
Weinan's Janelia page
Andrew's website
Twitter: @SaxeLab (Andrew), @sunw37 (Weinan)
Paper we discuss:
Organizing memories for generalization in complementary learning systems
Andrew's previous episode: BI 052 Andrew Saxe: Deep Learning Theory
0:00 - Intro
3:57 - Guest Intros
15:04 - Organizing memories for generalization
26:48 - Teacher, student, and notebook models
30:51 - Shallow linear networks
33:17 - How to optimize generalization
47:05 - Replay as a generalization regulator
54:57 - Whole greater than sum of its parts
1:05:37 - Unpredictability
1:10:41 - Heuristics
1:13:52 - Theoretical neuroscience for AI
1:29:42 - Current personal thinking

Nov 11, 2021 • 1h 7min
BI 119 Henry Yin: The Crisis in Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Henry and I discuss why he thinks neuroscience is in a crisis (in the Thomas Kuhn sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken. He points to the failure of neuroscience to successfully explain behavior despite decades of research. Instead, Henry proposes the brain is one big hierarchical set of control loops, each trying to control its output with respect to internally generated reference signals. He was inspired by control theory, but points out that most applications of control theory to biology are flawed because they fail to recognize that the reference signals are internally generated; instead, most control theory approaches, and neuroscience research in general, assume the reference signals are supplied externally... by the experimenter.
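A minimal sketch of that hierarchical control-loop picture (my illustration, not Henry's model): two nested loops each drive their perception toward a reference, and the higher loop's output serves as the lower loop's reference; the top-level reference is generated internally rather than supplied by an experimenter. The gains and the toy environment are invented.

```python
# Each loop acts to bring its perception to a reference signal.
# The higher loop's output is not a motor command but the reference
# for the loop below it.

position, velocity = 5.0, 0.0        # simple 1-D "environment"
dt = 0.05

ref_position = 0.0                   # internally generated top-level reference
for step in range(400):
    # Higher loop: perceives position, compares to its reference,
    # and outputs a reference velocity for the lower loop.
    ref_velocity = 1.0 * (ref_position - position)

    # Lower loop: perceives velocity, compares to the reference set above,
    # and outputs a force on the environment.
    force = 2.0 * (ref_velocity - velocity)

    # Environment responds; both perceptions are re-sampled next iteration.
    velocity += dt * force
    position += dt * velocity

print(f"final position = {position:.3f}, velocity = {velocity:.3f}")
```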
Yin lab at Duke
Twitter: @HenryYin19
Related papers:
The Crisis in Neuroscience
Restoring Purpose in Behavior
Achieving natural behavior in a robot using neurally inspired hierarchical perceptual control
0:00 - Intro
5:40 - Kuhnian crises
9:32 - Control theory and cybernetics
17:23 - How much of brain is control system?
20:33 - Higher order control representation
23:18 - Prediction and control theory
27:36 - The way forward
31:52 - Compatibility with mental representation
38:29 - Teleology
45:53 - The right number of subjects
51:30 - Continuous measurement
57:06 - Artificial intelligence and control theory

Nov 1, 2021 • 1h 36min
BI 118 Johannes Jäger: Beyond Networks
Support the show to get full episodes, full archive, and join the Discord community.
Johannes (Yogi) is a freelance philosopher, researcher, and educator. We discuss many of the topics in his online course, Beyond Networks: The Evolution of Living Systems. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from Kevin Mitchell.
Yogi's website and blog: Untethered in the Platonic Realm
Twitter: @yoginho
His YouTube course: Beyond Networks: The Evolution of Living Systems
Kevin Mitchell's previous episode: BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness
0:00 - Intro
4:10 - Yogi's background
11:00 - Beyond Networks - limits of dynamical systems models
16:53 - Kevin Mitchell question
20:12 - Process metaphysics
26:13 - Agency in evolution
40:37 - Agent-environment interaction, open-endedness
45:30 - AI and agency
55:40 - Life and intelligence
59:08 - Deep learning and neuroscience
1:03:21 - Mental autonomy
1:06:10 - William Wimsatt's biopsychological thicket
1:11:23 - Limitations of mechanistic dynamic explanation
1:18:53 - Synthesis versus multi-perspectivism
1:30:31 - Specialization versus generalization

Oct 19, 2021 • 1h 32min
BI 117 Anil Seth: Being You
Support the show to get full episodes, full archive, and join the Discord community.
Anil and I discuss a range of topics from his book, Being You: A New Science of Consciousness. Anil lays out his framework for explaining consciousness, which is embedded in what he calls the "real problem" of consciousness. You know the "hard problem", David Chalmers' term for our enduring difficulty in explaining why we have subjective awareness at all instead of being unfeeling, unexperiencing, machine-like organisms. Anil's "real problem" aims to explain, predict, and control the phenomenal properties of consciousness, and his hope is that, by doing so, the hard problem of consciousness will dissolve much like the mystery of explaining life dissolved with lots of good science.
Anil's account of perceptual consciousness, like seeing red, is that it's rooted in predicting our incoming sensory data. His account of our sense of self is that it's rooted in predicting our bodily states in order to control them.
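As a cartoon of that predictive account (my sketch, not Anil's formal model), the loop below maintains an internal estimate that is nudged by weighted prediction errors between predicted and incoming sensory signals. The numbers are arbitrary.

```python
import numpy as np

# Perception as an internal estimate continually updated to reduce the error
# between predicted and incoming sensory signals.

rng = np.random.default_rng(4)
true_signal = 2.0                      # the state of the world
estimate = 0.0                         # the brain's current "best guess"
precision_weight = 0.1                 # how strongly errors update the guess

for t in range(100):
    sensory_input = true_signal + rng.normal(scale=0.5)    # noisy sensation
    prediction_error = sensory_input - estimate
    estimate += precision_weight * prediction_error         # update toward the data

print(f"final estimate = {estimate:.2f} (true value {true_signal})")
```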
We talk about that and a lot of other topics from the book, like consciousness as "controlled hallucinations", free will, psychedelics, complexity and emergence, and the relation between life, intelligence, and consciousness. Plus, Anil answers a handful of questions from Megan Peters and Steve Fleming, both previous Brain Inspired guests.
Anil's website
Twitter: @anilkseth
Anil's book: Being You: A New Science of Consciousness
Megan's previous episode: BI 073 Megan Peters: Consciousness and Metacognition
Steve's previous episodes:
BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness
BI 107 Steve Fleming: Know Thyself
0:00 - Intro
6:32 - Megan Peters Q: Communicating Consciousness
15:58 - Human vs. animal consciousness
19:12 - Being You: A New Science of Consciousness
20:55 - Megan Peters Q: Will the hard problem go away?
30:55 - Steve Fleming Q: Contents of consciousness
41:01 - Megan Peters Q: Phenomenal character vs. content
43:46 - Megan Peters Q: Lempels of complexity
52:00 - Complex systems and emergence
55:53 - Psychedelics
1:06:04 - Free will
1:19:10 - Consciousness vs. life vs. intelligence

Oct 12, 2021 • 1h 31min
BI 116 Michael W. Cole: Empirical Neural Networks
Support the show to get full episodes, full archive, and join the Discord community.
Mike and I discuss his modeling approach to studying cognition. Many people I have on the podcast use deep neural networks to study brains, where the idea is to train or optimize the model to perform a task, then compare the model's properties with brain properties. Mike's approach is different in at least two ways. One, he builds the architecture of his models using connectivity data from fMRI recordings. Two, he doesn't train his models; instead, he uses functional connectivity data from the fMRI recordings to assign weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), or network coding models. We walk through his approach and what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules through instruction, and he fields questions from Kanaka Rajan, Kendrick Kay, and Patryk Laurent.
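For a rough sense of how such a network runs (my simplification, not Mike's published pipeline), the sketch below assigns weights from a stand-in functional-connectivity matrix and predicts each region's task activation as activity flowing in from all other regions over those weights. The matrices are random placeholders.

```python
import numpy as np

# Weights come from empirically estimated functional connectivity rather than
# training; a region's activation is predicted from the weighted activity of
# every other region.

rng = np.random.default_rng(5)
n_regions = 30
fc = rng.normal(scale=0.2, size=(n_regions, n_regions))    # stand-in for estimated FC
np.fill_diagonal(fc, 0.0)                                   # no self-connections
task_activity = rng.normal(size=n_regions)                  # stand-in for task activations

def predict_region(j, activity, weights):
    """Predict region j's activation from every other region's activity,
    weighted by its (empirically estimated) connectivity to region j."""
    mask = np.arange(len(activity)) != j
    return activity[mask] @ weights[mask, j]

predicted = np.array([predict_region(j, task_activity, fc) for j in range(n_regions)])
print("predicted activation of region 0:", round(float(predicted[0]), 3))
```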
The Cole Neurocognition lab
Twitter: @TheColeLab
Related papers:
Discovering the Computational Relevance of Brain Network Organization
Constructing neural network models from brain data reveals representational transformation underlying adaptive behavior
Kendrick Kay's previous episode: BI 026 Kendrick Kay: A Model By Any Other Name
Kanaka Rajan's previous episode: BI 054 Kanaka Rajan: How Do We Switch Behaviors?
0:00 - Intro
4:58 - Cognitive control
7:44 - Rapid Instructed Task Learning and Flexible Hub Theory
15:53 - Patryk Laurent question: free will
26:21 - Kendrick Kay question: fMRI limitations
31:55 - Empirically-estimated neural networks (ENNs)
40:51 - ENNs vs. deep learning
45:30 - Clinical relevance of ENNs
47:32 - Kanaka Rajan question: a proposed collaboration
56:38 - Advantage of modeling multiple regions
1:05:30 - How ENNs work
1:12:48 - How ENNs might benefit artificial intelligence
1:19:04 - The need for causality
1:24:38 - Importance of luck and serendipity