

Brain Inspired
Paul Middlebrooks
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
Episodes
Mentioned books

Aug 26, 2021 • 1h 14min
BI 112 Ali Mohebi and Ben Engelhard: The Many Faces of Dopamine
BI 112:
Ali Mohebi and Ben Engelhard
The Many Faces of Dopamine
Announcement:
Ben has started his new lab and is recruiting grad students.
Check out his lab here and apply!
Engelhard Lab
Ali and Ben discuss the ever-expanding discoveries about the roles dopamine plays in our cognition. Dopamine is known to play a role in learning: dopamine (DA) neurons fire when our reward expectations aren’t met, and that signal helps adjust our expectations. Roughly, DA corresponds to a reward prediction error. The reward prediction error has helped reinforcement learning in AI develop into a raging success, especially with deep reinforcement learning models trained to outperform humans in games like chess and Go. But DA likely contributes much more to brain function. We discuss many of those possible roles, how to think about computation with respect to neuromodulators like DA, how different time and spatial scales interact, and more.
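The reward-prediction-error idea can be sketched as a temporal-difference (TD) learning update, where the TD error plays the role attributed to phasic dopamine. This is a minimal illustration, not code from the episode; the states, learning rate, and reward values are invented for the example:

```python
# Minimal sketch of a TD(0) value update. The TD error ("rpe") is the
# quantity often compared to phasic dopamine: a reward prediction error.
# All numbers here are illustrative assumptions.

alpha = 0.1   # learning rate
gamma = 0.9   # discount factor

# Estimated value (expected future reward) of each state
V = {"cue": 0.0, "reward_state": 0.0}

def td_update(state, next_state, reward):
    """Compute the RPE (dopamine-like signal) and adjust V in place."""
    rpe = reward + gamma * V[next_state] - V[state]  # prediction error
    V[state] += alpha * rpe                          # adjust expectation
    return rpe

# Repeated cue -> reward pairings: the RPE shrinks as the cue's value
# comes to predict the reward, mirroring how DA responses diminish at
# reward delivery once the reward is expected.
for trial in range(50):
    rpe = td_update("cue", "reward_state", reward=1.0)
```

Early trials produce a large positive RPE (surprising reward); after many trials the cue's value approaches the reward magnitude and the RPE approaches zero.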
Dopamine: A Simple AND Complex Story
by Daphne Cornelisse
Guests
Ali Mohebi
@mohebial
Ben Engelhard
Timestamps:
0:00 – Intro
5:02 – Virtual Dopamine Conference
9:56 – History of dopamine’s roles
16:47 – Dopamine circuits
21:13 – Multiple roles for dopamine
31:43 – Deep learning panel discussion
50:14 – Computation and neuromodulation

Aug 19, 2021 • 1h 21min
BI NMA 06: Advancing Neuro Deep Learning Panel

Aug 13, 2021 • 1h 24min
BI NMA 05: NLP and Generative Models Panel
BI NMA 05:
NLP and Generative Models Panel
This is the 5th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school, and the 2nd of 3 in the deep learning series. In this episode, the panelists discuss their experiences “doing more with fewer parameters”: convnets, RNNs, attention & transformers, and generative models (VAEs & GANs).
Panelists
Brad Wyble
@bradpwyble
Kyunghyun Cho
@kchonyc
He He
@hhexiy
João Sedoc
@JoaoSedoc
The other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.

Aug 6, 2021 • 59min
BI NMA 04: Deep Learning Basics Panel
BI NMA 04:
Deep Learning Basics Panel
This is the 4th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school, and the first of 3 in the deep learning series. In this episode, the panelists discuss their experiences with some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Guests
Amita Kapoor
Lyle Ungar
@LyleUngar
Surya Ganguli
@SuryaGanguli
The other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fifth panel, about “doing more with fewer parameters”: convnets, RNNs, attention & transformers, and generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.

Jul 28, 2021 • 1h 38min
BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness
Erik, Kevin, and I discuss... well a lot of things.
Erik's recent novel The Revelations is a story about a group of neuroscientists trying to develop a good theory of consciousness (with a murder mystery plot).
Kevin's book Innate - How the Wiring of Our Brains Shapes Who We Are describes the messy process of getting from DNA, traversing epigenetics and development, to our personalities.
We talk about both books, then dive deeper into topics like whether brains evolved for moving our bodies vs. consciousness, how information theory is lending insights to emergent phenomena, and the role of agency with respect to what counts as intelligence.
Kevin's website
Erik's website
Twitter: @WiringtheBrain (Kevin); @erikphoel (Erik)
Books:
INNATE – How the Wiring of Our Brains Shapes Who We Are
The Revelations
Papers (Erik):
Falsification and consciousness
The emergence of informative higher scales in complex networks
Emergence as the conversion of information: A unifying theory
Timestamps
0:00 - Intro
3:28 - The Revelations - Erik's novel
15:15 - Innate - Kevin's book
22:56 - Cycle of progress
29:05 - Brains for movement or consciousness?
46:46 - Freud's influence
59:18 - Theories of consciousness
1:02:02 - Meaning and emergence
1:05:50 - Reduction in neuroscience
1:23:03 - Micro and macro - emergence
1:29:35 - Agency and intelligence

Jul 22, 2021 • 1h 1min
BI NMA 03: Stochastic Processes Panel
Panelists:
Yael Niv
@yael_niv
Konrad Kording
@KordingLab
Previous BI episodes:
BI 027 Ioana Marinescu & Konrad Kording: Causality in Quasi-Experiments
BI 014 Konrad Kording: Regulators, Mount Up!
Sam Gershman
@gershbrain
Previous BI episodes:
BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?
BI 028 Sam Gershman: Free Energy Principle & Human Machines
Tim Behrens
@behrenstim
Previous BI episodes:
BI 035 Tim Behrens: Abstracting & Generalizing Knowledge, & Human Replay
BI 024 Tim Behrens: Cognitive Maps
This is the third in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
The other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Fifth panel, about “doing more with fewer parameters”: convnets, RNNs, attention & transformers, and generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.

Jul 15, 2021 • 1h 15min
BI NMA 02: Dynamical Systems Panel
Panelists:
Adrienne Fairhall
@alfairhall
Bing Brunton
@bingbrunton
Kanaka Rajan
@rajankdr
BI 054 Kanaka Rajan: How Do We Switch Behaviors?
This is the second in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with linear systems, real neurons, and dynamic networks.
Other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Fifth panel, about “doing more with fewer parameters”: convnets, RNNs, attention & transformers, and generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.

Jul 12, 2021 • 1h 27min
BI NMA 01: Machine Learning Panel
Panelists:
Athena Akrami
@AthenaAkrami
Demba Ba
Gunnar Blohm
@GunnarBlohm
Kunlin Wei
This is the first in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Other panels:
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Fifth panel, about “doing more with fewer parameters”: convnets, RNNs, attention & transformers, and generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.

Jul 6, 2021 • 1h 25min
BI 110 Catherine Stinson and Jessica Thompson: Neuro-AI Explanation
Catherine, Jess, and I use some of the ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models are related to the target system they are built to explain. She suggests both the model and the target system should be considered as instantiations of a specific kind of phenomenon, and explanation is a product of relating the model and the target system to that specific aspect they both share. Jess has suggested we shift our focus of explanation from objects - like a brain area or a deep learning model - to the shared class of phenomenon performed by those objects. Doing so may help bridge the gap between the different forms of explanation currently used in neuroscience and AI. We also discuss Henk de Regt's conception of scientific understanding and its relation to explanation (they're different!), and plenty more.
Catherine's website
Jessica's blog
Twitter: Jess: @tsonj
Related papers:
From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence (Catherine)
Forms of explanation and understanding for neuroscience and artificial intelligence (Jess)
Jess is a postdoc in Chris Summerfield's lab; Chris and Sam Gershman were on a recent episode.
Understanding Scientific Understanding by Henk de Regt
Timestamps:
0:00 - Intro
11:11 - Background and approaches
27:00 - Understanding distinct from explanation
36:00 - Explanations as programs (early explanation)
40:42 - Explaining classes of phenomena
52:05 - Constitutive (neuro) vs. etiological (AI) explanations
1:04:04 - Do nonphysical objects count for explanation?
1:10:51 - Advice for early philosopher/scientists

Jun 26, 2021 • 2h 4min
BI 109 Mark Bickhard: Interactivism
Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark's account of representations and how what we represent in our minds is related to the external world - a challenge that has plagued the mind-body problem since the beginning. Basically, representations are anticipated interactions with the world, which can be true (if enacting one helps an organism maintain its thermodynamic relation with the world) or false (if it doesn't). And representations are functional, in that they serve the organism's self-maintenance by keeping it in a far-from-equilibrium thermodynamic state. Over the years, Mark has filled out Interactivism, starting from a process-metaphysics foundation and building up to account for representations, how our brains might implement them, and why AI is hindered by our modern "encoding" version of representation. We also compare Interactivism to other similar frameworks, like enactivism, predictive processing, and the free energy principle.
For related discussions on the foundations (and issues of) representations, check out episode 60 with Michael Rescorla, episode 61 with Jörn Diedrichsen and Niko Kriegeskorte, and especially episode 79 with Romain Brette.
Mark's website
Related papers:
Interactivism: A manifesto
Plenty of other papers are available via his website.
Also mentioned:
The First Half Second: The Microgenesis and Temporal Dynamics of Unconscious and Conscious Visual Processes (2006), Haluk Ögmen, Bruno G. Breitmeyer
Maiken Nedergaard's work on sleep
Timestamps
0:00 - Intro
5:06 - Previous and upcoming book
9:17 - Origins of Mark's thinking
14:31 - Process vs. substance metaphysics
27:10 - Kinds of emergence
32:16 - Normative emergence to normative function and representation
36:33 - Representation in Interactivism
46:07 - Situation knowledge
54:02 - Interactivism vs. Enactivism
1:09:37 - Interactivism vs Predictive/Bayesian brain
1:17:39 - Interactivism vs. Free energy principle
1:21:56 - Microgenesis
1:33:11 - Implications for neuroscience
1:38:18 - Learning as variation and selection
1:45:07 - Implications for AI
1:55:06 - Everything is a clock
1:58:14 - Is Mark a philosopher?