Brain Inspired

Latest episodes

Apr 27, 2022 • 1h 26min

BI 134 Mandyam Srinivasan: Bee Flight and Cognition

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience.

Srini is Emeritus Professor at the Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, and proximity to objects, and to land gracefully. These abilities are largely governed by control systems that balance incoming perceptual signals against internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.

Srini's website.
Related papers:
Vision, perception, navigation and 'cognition' in honeybees and applications to aerial robotics.

0:00 - Intro
3:34 - Background
8:20 - Bee experiments
14:30 - Bee flight and navigation
28:05 - Landing
33:06 - Umwelt and perception
37:26 - Bee-inspired aerial robotics
49:10 - Motion camouflage
51:52 - Cognition in bees
1:03:10 - Small vs. big brains
1:06:42 - Pain in bees
1:12:50 - Subjective experience
1:15:25 - Deep learning
1:23:00 - Path forward
Apr 15, 2022 • 1h 29min

BI 133 Ken Paller: Lucid Dreaming, Memory, and Sleep

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience.

Ken discusses the recent work in his lab that allows communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of consciousness, sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App, which is freely available via his lab. We also discuss much of his work on memory and learning in general, and specifically in relation to sleep, like reactivating specific memories during sleep to improve learning.

Ken's Cognitive Neuroscience Laboratory.
Twitter: @kap101.
The Lucid Dreaming App.
Related papers:
Memory and Sleep: How Sleep Cognition Can Change the Waking Mind for the Better.
Does memory reactivation during sleep support generalization at the cost of memory specifics?
Real-time dialogue between experimenters and dreamers during REM sleep.

0:00 - Intro
2:48 - Background and types of memory
14:44 - Consciousness and memory
23:32 - Phases of sleep and wakefulness
28:19 - Sleep, memory, and learning
33:50 - Targeted memory reactivation
48:34 - Problem solving during sleep
51:50 - 2-way communication with lucid dreamers
1:01:43 - Confounds to the paradigm
1:04:50 - Limitations and future studies
1:09:35 - Lucid dreaming app
1:13:47 - How sleep can inform AI
1:20:18 - Advice for students
Apr 3, 2022 • 1h 17min

BI 132 Ila Fiete: A Grid Scaffold for Memory

Announcement: I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. Learn more here.

Support the show to get full episodes, full archive, and join the Discord community.

Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex: signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist, her approach as a "neurophysicist," and a review she's publishing all about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework.

The Fiete Lab.
Related papers:
A structured scaffold underlies activity in the hippocampus.
Attractor and integrator networks in the brain.

0:00 - Intro
3:36 - "Neurophysicist"
9:30 - Bottom-up vs. top-down
15:57 - Tool scavenging
18:21 - Cognitive maps and hippocampus
22:40 - Hopfield networks
27:56 - Internal scaffold
38:42 - Place cells
43:44 - Grid cells
54:22 - Grid cells encoding place cells
59:39 - Scaffold model: stacked Hopfield networks
1:05:39 - Attractor landscapes
1:09:22 - Landscapes across scales
1:12:27 - Dimensionality of landscapes
Mar 26, 2022 • 1h 27min

BI 131 Sri Ramaswamy and Jie Mei: Neuromodulation-aware DNNs

Support the show to get full episodes, full archive, and join the Discord community.

Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. How much detail to include in models is an ever-present question, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building "neuromodulation-aware DNNs."

Neural Circuits Laboratory.
Twitter: Sri: @srikipedia; Jie: @neuro_Mei.
Related papers:
Informing deep neural networks by multiscale principles of neuromodulatory systems.

0:00 - Intro
3:10 - Background
9:19 - Bottom-up vs. top-down
14:42 - Levels of abstraction
22:46 - Biological neuromodulation
33:18 - Inventing neuromodulators
41:10 - How far along are we?
53:31 - Multiple realizability
1:09:40 - Modeling dendrites
1:15:24 - Across-species neuromodulation
Mar 13, 2022 • 1h 1min

BI 130 Eve Marder: Modulation of Networks

Support the show to get full episodes, full archive, and join the Discord community.

Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons, and its connections and neurophysiology are well understood. Yet Eve's work has shown it functions under a remarkable diversity of conditions, and does so in a remarkable variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains.

The Marder Lab.
Twitter: @MarderLab.
Related to our conversation:
Understanding Brains: Details, Intuition, and Big Data.
Emerging principles governing the operation of neural networks (Eve mentions this regarding "building blocks" of neural networks).

0:00 - Intro
3:58 - Background
8:00 - Levels of ambiguity
9:47 - Stomatogastric nervous system
17:13 - Structure vs. function
26:08 - Role of theory
34:56 - Technology vs. understanding
38:25 - Higher cognitive function
44:35 - Adaptability, resilience, evolution
50:23 - Climate change
56:11 - Deep learning
57:12 - Dynamical systems
Mar 2, 2022 • 1h 21min

BI 129 Patryk Laurent: Learning from the Real World

Support the show to get full episodes, full archive, and join the Discord community.

Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what's needed to move forward in AI, including which principles of brain processing are more and less important. We also discuss his own work using some of those principles to help deep learning generalize, in order to better capture how humans behave in and perceive the world.

Patryk's homepage.
Twitter: @paklnet.
Related papers:
Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network.

0:00 - Intro
2:22 - Patryk's background
8:37 - Importance of diverse skills
16:14 - What is intelligence?
20:34 - Important brain principles
22:36 - Learning from the real world
35:09 - Language models
42:51 - AI contribution to neuroscience
48:22 - Criteria for "real" AI
53:11 - Neuroscience for AI
1:01:20 - What can we ignore about brains?
1:11:45 - Advice to past self
Feb 20, 2022 • 1h 26min

BI 128 Hakwan Lau: In Consciousness We Trust

Support the show to get full episodes, full archive, and join the Discord community.

Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges to empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness.

Hakwan's lab: Consciousness and Metacognition Lab.
Twitter: @hakwanlau.
Book:
In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience.

0:00 - Intro
4:37 - In Consciousness We Trust
12:19 - Too many consciousness theories?
19:26 - Philosophy and neuroscience of consciousness
29:00 - Local vs. global theories
31:20 - Perceptual reality monitoring and GANs
42:43 - Functions of consciousness
47:17 - Mental quality space
56:44 - Cognitive maps
1:06:28 - Performance capacity confounds
1:12:28 - Blindsight
1:19:11 - Philosophy vs. empirical work
Feb 10, 2022 • 1h 43min

BI 127 Tomás Ryan: Memory, Instinct, and Forgetting

Support the show to get full episodes, full archive, and join the Discord community.

Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating "engram cells" originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanism as our memories (engrams), and that memories may transition to instincts across generations. We begin by addressing Randy Gallistel's engram ideas from the previous episode: BI 126 Randy Gallistel: Where Is the Engram?

Ryan Lab.
Twitter: @TJRyan_77.
Related papers:
Engram cell connectivity: an evolving substrate for information storage.
Forgetting as a form of adaptive engram cell plasticity.
Memory and Instinct as a Continuum of Information Storage, in The Cognitive Neurosciences.
The Bandwagon, by Claude Shannon.

0:00 - Intro
4:05 - Response to Randy Gallistel
10:45 - Computation in the brain
14:52 - Instinct and memory
19:37 - Dynamics of memory
21:55 - Wiring vs. connection strength plasticity
24:16 - Changing one's mind
33:09 - Optogenetics and memory experiments
47:24 - Forgetting as learning
1:06:35 - Folk psychological terms
1:08:49 - Memory becoming instinct
1:21:49 - Instinct across the lifetime
1:25:52 - Boundaries of memories
1:28:52 - Subjective experience of memory
1:31:58 - Interdisciplinary research
1:37:32 - Communicating science
Jan 31, 2022 • 1h 20min

BI 126 Randy Gallistel: Where Is the Engram?

Support the show to get full episodes, full archive, and join the Discord community.

Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA, RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. We also talk about some research and theoretical work since then that supports his views.

Randy's Rutgers website.
Book:
Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience.
Related papers:
The theoretical RNA paper Randy mentions: An RNA-based theory of natural universal computation.
Evidence for an intracellular engram in the cerebellum: Memory trace and timing mechanism localized to cerebellar Purkinje cells.
The exchange between Randy and John Lisman.
The blog post Randy mentions about universal function approximation: The Truth About the [Not So] Universal Approximation Theorem.

0:00 - Intro
6:50 - Cognitive science vs. computational neuroscience
13:23 - Brain as computing device
15:45 - Noam Chomsky's influence
17:58 - Memory must be stored within cells
30:58 - Theoretical support for the idea
34:15 - Cerebellum evidence supporting the idea
40:56 - What is the write mechanism?
51:11 - Thoughts on deep learning
1:00:02 - Multiple memory mechanisms?
1:10:56 - The role of plasticity
1:12:06 - Trying to convince molecular biologists
Jan 19, 2022 • 1h 11min

BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys

Support the show to get full episodes, full archive, and join the Discord community.

Doris, Tony, and Blake are the organizers of this year's NAISys conference, From Neuroscience to Artificially Intelligent Systems (NAISys), at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.

From Neuroscience to Artificially Intelligent Systems (NAISys).
Doris: @doristsao. Tsao Lab.
Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons.
Tony: @TonyZador. Zador Lab.
A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains.
Blake: @tyrell_turing. The Learning in Neural Circuits Lab.
The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning.

0:00 - Intro
4:16 - Tony Zador
5:38 - Doris Tsao
10:44 - Blake Richards
15:46 - Deductive, inductive, abductive inference
16:32 - NAISys
33:09 - Evolution, development, learning
38:23 - Learning: plasticity vs. dynamical structures
54:13 - Different kinds of understanding
1:03:05 - Do we understand evolution well enough?
1:04:03 - Neuro-AI fad?
1:06:26 - Are your problems bigger or smaller now?