

Brain Inspired
Paul Middlebrooks
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
Episodes

Oct 30, 2022 • 1h 31min
BI 151 Steve Byrnes: Brain-like AGI Safety
Support the show to get full episodes, full archive, and join the Discord community.
Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his Intro to Brain-Like-AGI Safety blog series, which uses what he has learned about brains to address how we might safely make AGI.
Steve's website.
Twitter: @steve47285
Intro to Brain-Like-AGI Safety.

Oct 15, 2022 • 1h 38min
BI 150 Dan Nicholson: Machines, Organisms, Processes
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Dan Nicholson is a philosopher at George Mason University. He incorporates the history of science and philosophy into modern analyses of our conceptions of processes related to life and organisms. He is also interested in replacing our conception of the universe as made fundamentally of things/substances with the idea that the universe is made fundamentally of processes (process philosophy). In this episode, we discuss both of those subjects: why the "machine conception of the organism" is incorrect, how to apply these ideas to topics like neuroscience and artificial intelligence, and much more.
Dan's website. Google Scholar.
Twitter: @NicholsonHPBio
Book: Everything Flows: Towards a Processual Philosophy of Biology.
Related papers:
Is the Cell Really a Machine?
The Machine Conception of the Organism in Development and Evolution: A Critical Analysis.
On Being the Right Size, Revisited: The Problem with Engineering Metaphors in Molecular Biology.
Related episode: BI 118 Johannes Jäger: Beyond Networks.
0:00 - Intro
2:49 - Philosophy and science
16:37 - Role of history
23:28 - What Is Life? And interaction with James Watson
38:37 - Arguments against the machine conception of organisms
49:08 - Organisms as streams (processes)
57:52 - Process philosophy
1:08:59 - Alfred North Whitehead
1:12:45 - Process and consciousness
1:22:16 - Artificial intelligence and process
1:31:47 - Language and symbols and processes

Oct 5, 2022 • 1h 34min
BI 149 William B. Miller: Cell Intelligence
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
William B. Miller is an ex-physician turned evolutionary biologist. In this episode, we discuss topics related to his new book, Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions. The premise of the book is that all individual cells are intelligent in their own right and possess a sense of self. From this, Bill makes the case that cells cooperate with other cells to engineer whole organisms that in turn serve as wonderful hosts for the myriad cell types. Further, our bodies are collections of our own cells (with our DNA) and an enormous number and diversity of foreign cells - our microbiome - that communicate and cooperate with each other and with our own cells. We also discuss how cell intelligence compares to human intelligence, what Bill calls the "era of the cell" in science, how the future of medicine will harness the intelligence of cells and their cooperative nature, and much more.
William's website.
Twitter: @BillMillerMD.
Book: Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions.
0:00 - Intro
3:43 - Bioverse
7:29 - Bill's cell appreciation origins
17:03 - Microbiomes
27:01 - Complexity of microbiomes and the "Era of the cell"
46:00 - Robustness
55:05 - Cell vs. human intelligence
1:10:08 - Artificial intelligence
1:21:01 - Neuro-AI
1:25:53 - Hard problem of consciousness

Sep 25, 2022 • 1h 29min
BI 148 Gaute Einevoll: Brain Simulations
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Gaute Einevoll is a professor at the University of Oslo and the Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail, and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls "measurement physics"). We also discuss Gaute's thoughts on Carina Curto's "beautiful vs. ugly models," and his reaction to Noah Hutton's In Silico documentary about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception).
Gaute's website.
Twitter: @GauteEinevoll.
Related papers:
The Scientific Case for Brain Simulations.
Brain signal predictions from multi-scale networks using a linearized framework.
Uncovering circuit mechanisms of current sinks and sources with biophysical simulations of primary visual cortex.
LFPy: a Python module for calculation of extracellular potentials from multicompartment neuron models.
Gaute's Sense and Science podcast.
0:00 - Intro
3:25 - Beautiful and messy models
6:34 - In Silico
9:47 - Goals of human brain project
15:50 - Brain simulation approach
21:35 - Degeneracy in parameters
26:24 - Abstract principles from simulations
32:58 - Models as tools
35:34 - Predicting brain signals
41:45 - LFPs closer to average
53:57 - Plasticity in simulations
56:53 - How detailed should we model neurons?
59:09 - Lessons from predicting signals
1:06:07 - Scaling up
1:10:54 - Simulation as a tool
1:12:35 - Oscillations
1:16:24 - Manifolds and simulations
1:20:22 - Modeling cortex like Hodgkin and Huxley

Sep 13, 2022 • 1h 37min
BI 147 Noah Hutton: In Silico
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. This claim inspired Noah to chronicle the project, visiting Henry and his team periodically throughout. The result was In Silico, which tells the scientific, human, and social story of Henry's massively funded projects - the Blue Brain Project and the Human Brain Project.
In Silico website.
Rent or buy In Silico.
Noah's website.
Twitter: @noah_hutton.
0:00 - Intro
3:36 - Release and premiere
7:37 - Noah's background
9:52 - Origins of In Silico
19:39 - Recurring visits
22:13 - Including the critics
25:22 - Markram's shifting outlook and salesmanship
35:43 - Promises and delivery
41:28 - Computer and brain terms interchange
49:22 - Progress vs. illusion of progress
52:19 - Close to quitting
58:01 - Salesmanship vs bad at estimating timelines
1:02:12 - Brain simulation science
1:11:19 - AGI
1:14:48 - Brain simulation vs. neuro-AI
1:21:03 - Opinion on TED talks
1:25:16 - Hero worship
1:29:03 - Feedback on In Silico

Sep 7, 2022 • 1h 23min
BI 146 Lauren Ross: Causal and Non-Causal Explanation
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward's interventionist approach to causation, which Jim and I discussed in episode 145. In this episode, we discuss Jim's lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints.
Lauren's website.
Twitter: @ProfLaurenRoss
Related papers:
A call for more clarity around causality in neuroscience.
The explanatory nature of constraints: Law-based, mathematical, and causal.
Causal Concepts in Biology: How Pathways Differ from Mechanisms and Why It Matters.
Distinguishing topological and causal explanation.
Multiple Realizability from a Causal Perspective.
Cascade versus mechanism: The diversity of causal structure in science.
0:00 - Intro
2:46 - Lauren's background
10:14 - Jim Woodward legacy
15:37 - Golden era of causality
18:56 - Mechanistic explanation
28:51 - Pathways
31:41 - Cascades
36:25 - Topology
41:17 - Constraint
50:44 - Hierarchy of explanations
53:18 - Structure and function
57:49 - Brain and mind
1:01:28 - Reductionism
1:07:58 - Constraint again
1:14:38 - Multiple realizability

Aug 28, 2022 • 1h 26min
BI 145 James Woodward: Causation with a Human Face
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
James Woodward is a recently retired Professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers around intervention - intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, Causation with a Human Face: Normative Theory and Descriptive Psychology. In the book, Jim advocates that how we should think about causality - the normative - needs to be studied together with how we actually do think about causal relations in the world - the descriptive. We discuss many topics around this central notion, epistemology versus metaphysics, and the nature and varieties of causal structures.
Jim's website.
Making Things Happen: A Theory of Causal Explanation.
Causation with a Human Face: Normative Theory and Descriptive Psychology.
0:00 - Intro
4:14 - Causation with a Human Face & Functionalist approach
6:16 - Interventionist causality; Epistemology and metaphysics
9:35 - Normative and descriptive
14:02 - Rationalist approach
20:24 - Normative vs. descriptive
28:00 - Varying notions of causation
33:18 - Invariance
41:05 - Causality in complex systems
47:09 - Downward causation
51:14 - Natural laws
56:38 - Proportionality
1:01:12 - Intuitions
1:10:59 - Normative and descriptive relation
1:17:33 - Causality across disciplines
1:21:26 - What would help our understanding of causation

Aug 17, 2022 • 1h 12min
BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models
Check out my short video series about what's missing in AI and Neuroscience.
Support the show to get full episodes, full archive, and join the Discord community.
Large language models, often now called "foundation models", are the models du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.
Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for efficient communication. She has also recently been comparing the activity in language models to activity in our brain's language network, finding commonality in the ability to predict upcoming words.
Emily M. Bender is a computational linguist at the University of Washington. Recently she has been considering questions about whether language models understand the meaning of the language they produce (no), whether we should be scaling language models as is the current practice (not really), how linguistics can inform language models, and more.
EvLab.
Emily's website.
Twitter: @ev_fedorenko; @emilymbender.
Related papers:
Language and thought are not the same thing: Evidence from neuroimaging and neurological patients. (Fedorenko)
The neural architecture of language: Integrative modeling converges on predictive processing. (Fedorenko)
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender)
Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. (Bender)
0:00 - Intro
4:35 - Language and cognition
15:38 - Grasping for meaning
21:32 - Are large language models producing language?
23:09 - Next-word prediction in brains and models
32:09 - Interface between language and thought
35:18 - Studying language in nonhuman animals
41:54 - Do we understand language enough?
45:51 - What do language models need?
51:45 - Are LLMs teaching us about language?
54:56 - Is meaning necessary, and does it matter how we learn language?
1:00:04 - Is our biology important for language?
1:04:59 - Future outlook

Aug 5, 2022 • 1h 25min
BI 143 Rodolphe Sepulchre: Mixed Feedback Control
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the brain's mixed digital and analog signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how "If you wish to contribute original work, be prepared to face loneliness," among other topics.
Rodolphe's website.
Related papers:
Spiking Control Systems.
Control Across Scales by Positive and Negative Feedback.
Neuromorphic control. (arXiv version)
Related episodes:
BI 130 Eve Marder: Modulation of Networks
BI 119 Henry Yin: The Crisis in Neuroscience
0:00 - Intro
4:38 - Control engineer
9:52 - Control vs. dynamical systems
13:34 - Building vs. understanding
17:38 - Mixed feedback signals
26:00 - Robustness
28:28 - Eve Marder
32:00 - Loneliness
37:35 - Across levels
44:04 - Neuromorphics and neuromodulation
52:15 - Barrier to adopting neuromorphics
54:40 - Deep learning influence
58:04 - Beyond energy efficiency
1:02:02 - Deep learning for neuro
1:14:15 - Role of philosophy
1:16:43 - Doing it right

Jul 26, 2022 • 1h 43min
BI 142 Cameron Buckner: The New DoGMA
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Cameron Buckner is a philosopher and cognitive scientist at The University of Houston. He is writing a book about the age-old philosophical debate on how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological "domain-general faculties" underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that building systems that possess the right handful of faculties, and putting those systems together so they can cooperate in a general and flexible manner, will result in cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content - how our thoughts connect to the natural external world.
Cameron's website.
Twitter: @cameronjbuckner.
Related papers:
Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks.
A Forward-Looking Theory of Content.
Other sources Cameron mentions:
Innateness, AlphaZero, and Artificial Intelligence (Gary Marcus).
Radical Empiricism and Machine Learning Research (Judea Pearl).
Fodor's guide to the Humean mind (Tamás Demeter).
0:00 - Intro
4:55 - Interpreting old philosophy
8:26 - AI and philosophy
17:00 - Empiricism vs. rationalism
27:09 - Domain-general faculties
33:10 - Faculty psychology
40:28 - New faculties?
46:11 - Human faculties
51:15 - Cognitive architectures
56:26 - Language
1:01:40 - Beyond dichotomous thinking
1:04:08 - Lower-level faculties
1:10:16 - Animal cognition
1:14:31 - A Forward-Looking Theory of Content