
Brain Inspired
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
Latest episodes

Dec 11, 2023 • 1h 29min
BI 180 Panel Discussion: Long-term Memory Encoding and Connectome Decoding
Panel discussion on using neuroscience technologies to decode memory from connectomes, featuring a group of experts including Kenneth Hayworth. Topics include advancements in connectomics, decoding memory and connectomes, analyzing connectome complexity, the role of molecules, deep learning parallelism, studying connectome data with cultured neurons, understanding neuronal interactions, and the rules of connectome interpretation.

Nov 27, 2023 • 1h 39min
BI 179 Laura Gradowski: Include the Fringe with Pluralism
Laura Gradowski, a philosopher of science at the University of Pittsburgh, discusses the importance of scientific pluralism and the inclusion of fringe theories in science. She cites historical examples, including the Garcia effect, that challenge mainstream theories and highlight the need for tolerance and diversity in scientific research. The podcast explores various topics such as the transition of fringe ideas to mainstream acceptance, the validation of traditional ecological knowledge, and the role of constraints in generating movement and thoughts. It also delves into the concept of the 'no end principle' and the continuous exploration of new ideas in science.

Nov 13, 2023 • 1h 36min
BI 178 Eric Shea-Brown: Neural Dynamics and Dimensions
Eric Shea-Brown, a theoretical neuroscientist, discusses dynamics and dimensionality in neural networks, exploring how they change during tasks. He highlights research findings on structural connection motifs and dimensionalities related to different modes of learning. The podcast also covers the impact of model architectures on neural dynamics, the complexity of the biological brain, and the concept of the rich brain vs. the lazy brain. The chapter on paths and motifs in neural networks showcases a student's prediction abilities. Finally, the guest shares his hopes for advances in neuroscience and his support for the podcast.

Oct 30, 2023 • 1h 14min
BI 177 Special: Bernstein Workshop Panel
Support the show to get full episodes, full archive, and join the Discord community.
I was recently invited to moderate a panel at the annual Bernstein Conference, held this year in Berlin, Germany. The panel was part of a satellite workshop titled "How can machine learning be used to generate insights and theories in neuroscience?" Below are the panelists. I hope you enjoy the discussion!
Program: How can machine learning be used to generate insights and theories in neuroscience?
Panelists:
Katrin Franke
Lab website.
Twitter: @kfrankelab.
Ralf Haefner
Haefner lab.
Twitter: @haefnerlab.
Martin Hebart
Hebart Lab.
Twitter: @martin_hebart.
Johannes Jaeger
Yogi's website.
Twitter: @yoginho.
Fred Wolf
Fred's university webpage.
Organizers:
Alexander Ecker | University of Göttingen, Germany
Fabian Sinz | University of Göttingen, Germany
Mohammad Bashiri, Pavithra Elumalai, Michaela Vystrcilová | University of Göttingen, Germany

Oct 14, 2023 • 1h 24min
BI 176 David Poeppel Returns
David Poeppel, researcher studying auditory cognition, speech perception, language, and music at NYU, returns to discuss the mysteries of memory storage, the language of thought hypothesis, and the pace of scientific progress in understanding the brain. They explore the challenges of studying memory, the implementation requirements for language processing, and the potential combination of symbolic computation and dynamics in the brain. They also delve into the downside of unprincipled data mining and the re-emergence of the language of thought hypothesis in cognitive organization.

Oct 3, 2023 • 1h 47min
BI 175 Kevin Mitchell: Free Agents
Kevin Mitchell, Professor of genetics at Trinity College Dublin, discusses his new book 'Free Agents: How Evolution Gave Us Free Will'. Topics include the origin of agency, complexity of free will, indeterminacy in the universe, harnessing brain's randomness, creativity, and artificial free will.

Sep 13, 2023 • 1h 45min
BI 174 Alicia Juarrero: Context Changes Everything
Philosopher Alicia Juarrero discusses the importance of constraints in understanding complex systems like the brain. They explore the concept of context and its impact on human behavior. The conversation touches on emergent properties, culture, and the role of top-down constraints. They also delve into embodied cognition, brain activity, dimensionality, the difference between mind and brain, and the concept of wisdom. The podcast concludes by exploring hope, protein folding, and higher levels of organization and constraints.

Aug 30, 2023 • 1h 36min
BI 173 Justin Wood: Origins of Visual Intelligence
In this podcast, Justin Wood discusses his work comparing the visual cognition of newborn chicks and AI models. He uses controlled-rearing techniques to understand visual intelligence and build systems that emulate biological organisms. They explore topics like object recognition, reverse engineering, collective behavior, and the potential of transformers in cognitive science.

Aug 7, 2023 • 1h 31min
BI 172 David Glanzman: Memory All The Way Down
Support the show to get full episodes, full archive, and join the Discord community.
David runs his lab at UCLA, where he's a distinguished professor. David used to believe what is currently the mainstream view, that our memories are stored in our synapses, those connections between our neurons. So as we learn, the synaptic connections strengthen and weaken until they're just right, and that serves to preserve the memory. That's been the dominant view in neuroscience for decades, and is the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others' experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David discusses where he and Randy differ in their thoughts. This episode starts out pretty technical as David describes the series of experiments that changed his mind, but after that we broaden our discussion to a lot of the surrounding issues regarding whether his story about memory is true. And we discuss meta-issues like how old discarded ideas in science often find their way back, what it's like studying a non-mainstream topic, including the challenges of trying to get funded for it, and so on.
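To make the synaptic view concrete, here's a minimal sketch (my illustration, not David's or anyone's actual model) of error-driven synaptic weight updating, the principle that, scaled up via backpropagation, underlies deep learning:

```python
# A toy, single-neuron illustration of the mainstream "memory lives in the
# synapses" view: learning nudges synaptic weights up and down until the
# response is right, and the final weight pattern *is* the stored memory.
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 5
weights = rng.normal(0, 0.1, n_inputs)        # initial "synaptic strengths"

def neuron(x, w):
    return 1.0 / (1.0 + np.exp(-x @ w))       # firing rate: sigmoid of summed input

pattern = rng.normal(size=n_inputs)           # an input pattern to be learned
target, lr = 1.0, 0.5                         # desired response and learning rate

for _ in range(200):
    out = neuron(pattern, weights)
    # Delta rule: strengthen or weaken each synapse in proportion to its
    # input and the output error (the single-layer case of backprop).
    weights += lr * (target - out) * pattern

print(neuron(pattern, weights))               # ~1.0: the memory now lives in the weights
```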
David's Faculty Page.
Related papers
The central importance of nuclear mechanisms in the storage of memory.
David mentions Arc and virus-like transmission:
The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer.
Structure of an Arc-ane virus-like capsid.
David mentions many of the ideas from the Pushing the Boundaries: Neuroscience, Cognition, and Life Symposium.
Related episodes:
BI 126 Randy Gallistel: Where Is the Engram?
BI 127 Tomás Ryan: Memory, Instinct, and Forgetting

Jul 22, 2023 • 1h 25min
BI 171 Mike Frank: Early Language and Cognition
Support the show to get full episodes, full archive, and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike's main interests center on how children learn language - in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition.
We discuss that, plus:
His love for developing open datasets that anyone can use.
The dance he dances between bottom-up data-driven approaches in this big-data era, traditional experimental approaches, and top-down theory-driven approaches.
How early language learning in children differs from LLM learning.
Mike's rational speech act model of language use, which considers the intentions or pragmatics of speakers and listeners in dialogue (sketched in code below).
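For the curious, here's a minimal sketch of a vanilla rational speech act (RSA) model. The recursion (literal listener, pragmatic speaker, pragmatic listener) is the standard RSA setup; the toy reference game, the names, and the uniform prior are my assumptions for illustration, and speaker utterance costs are omitted:

```python
# Vanilla RSA on a toy reference game: two utterances, three referents.
import numpy as np

utterances = ["hat", "glasses"]
meanings = ["face_with_hat", "face_with_hat_and_glasses", "face_with_glasses"]
# lexicon[u, m] = 1 if utterance u is literally true of meaning m (toy values).
lexicon = np.array([
    [1, 1, 0],   # "hat" is true of the first two faces
    [0, 1, 1],   # "glasses" is true of the last two faces
], dtype=float)

prior = np.ones(len(meanings)) / len(meanings)   # uniform prior over meanings
alpha = 1.0                                      # speaker rationality

# Literal listener: L0(m | u) proportional to lexicon[u, m] * P(m)
L0 = lexicon * prior
L0 /= L0.sum(axis=1, keepdims=True)

# Pragmatic speaker: S1(u | m) proportional to exp(alpha * log L0(m | u))
S1 = np.exp(alpha * np.log(L0 + 1e-12)).T        # rows: meanings, cols: utterances
S1 /= S1.sum(axis=1, keepdims=True)

# Pragmatic listener: L1(m | u) proportional to S1(u | m) * P(m)
L1 = S1.T * prior
L1 /= L1.sum(axis=1, keepdims=True)

# Hearing "hat", the pragmatic listener favors the face with *only* a hat,
# since a speaker who saw hat+glasses could have said "glasses" instead.
print(dict(zip(meanings, np.round(L1[0], 3))))
```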
Language & Cognition Lab
Twitter: @mcxfrank.
I mentioned Mike's tweet thread about saying LLMs "have" cognitive functions.
Related papers:
Pragmatic language interpretation as probabilistic inference.
Toward a “Standard Model” of Early Language Learning.
The pervasive role of pragmatics in early language.
The Structure of Developmental Variation in Early Childhood.
Relational reasoning and generalization using non-symbolic neural networks.
Unsupervised neural network models of the ventral visual stream.