Brain Inspired

Paul Middlebrooks
Oct 2, 2021 • 1h 24min

BI 115 Steve Grossberg: Conscious Mind, Resonant Brain

Support the show to get full episodes, full archive, and join the Discord community.

Steve and I discuss his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from György Buzsáki, Jay McClelland, and John Krakauer.

Links:
Steve's BU website
Conscious Mind, Resonant Brain: How Each Brain Makes a Mind
Previous Brain Inspired episode: BI 082 Steve Grossberg: Adaptive Resonance Theory

Timestamps:
0:00 - Intro
2:38 - Conscious Mind, Resonant Brain
11:49 - Theoretical method
15:54 - ART, learning, and consciousness
22:58 - Conscious vs. unconscious resonance
26:56 - György Buzsáki question
30:04 - Remaining mysteries in visual system
35:16 - John Krakauer question
39:12 - Jay McClelland question
51:34 - Any missing principles to explain human cognition?
1:00:16 - Importance of an early good career start
1:06:50 - Has modeling training caught up to experiment training?
1:17:12 - Universal development code
Sep 22, 2021 • 1h 38min

BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind

Support the show to get full episodes, full archive, and join the Discord community.

Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more.

Links:
Mark's website
Mazviita's University of Edinburgh page
Twitter (Mark): @msprevak
Mazviita's previous Brain Inspired episode: BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality
The related book we discuss: The Routledge Handbook of the Computational Mind (2018), Mark Sprevak and Matteo Colombo (Editors)

Timestamps:
0:00 - Intro
5:26 - Philosophy contributing to mind science
15:45 - Trend toward hyperspecialization
21:38 - Practice-focused philosophy of science
30:42 - Computationalism
33:05 - Philosophy of mind: identity theory, functionalism
38:18 - Computations as descriptions
41:27 - Pluralism and perspectivalism
54:18 - How much of brain function is computation?
1:02:11 - AI as computationalism
1:13:28 - Naturalizing representations
1:30:08 - Are you doing it right?
Sep 12, 2021 • 1h 31min

BI 113 David Barack and John Krakauer: Two Views On Cognition

Support the show to get full episodes, full archive, and join the Discord community.

David and John discuss some of the concepts from their recent paper Two Views on the Cognitive Brain, in which they argue the recent population-based dynamical systems approach is a promising route to understanding brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David's perspectives as a practicing neuroscientist and philosopher.

Links:
David's webpage
John's lab
Twitter: @DLBarack (David); @blamlab (John)
Paper: Two Views on the Cognitive Brain
John's previous episodes:
BI 025 John Krakauer: Understanding Cognition
BI 077 David and John Krakauer: Part 1
BI 078 David and John Krakauer: Part 2

Timestamps:
0:00 - Intro
3:13 - David's philosophy and neuroscience experience
20:01 - Renaissance person
24:36 - John's medical training
31:58 - Two Views on the Cognitive Brain
44:18 - Representation
49:37 - Studying populations of neurons
1:05:17 - What counts as representation
1:18:49 - Does this approach matter for AI?
Sep 2, 2021 • 57min

BI ViDA Panel Discussion: Deep RL and Dopamine

Aug 26, 2021 • 1h 14min

BI 112 Ali Mohebi and Ben Engelhard: The Many Faces of Dopamine

Announcement: Ben has started his new lab and is recruiting grad students. Check out the Engelhard Lab and apply!

Ali and Ben discuss the ever-expanding discoveries about the roles dopamine plays in our cognition. Dopamine is known to play a role in learning: dopamine (DA) neurons fire when our reward expectations aren't met, and that signal helps adjust our expectations. Roughly, DA corresponds to a reward prediction error. The reward prediction error has helped reinforcement learning in AI develop into a raging success, especially with deep reinforcement learning models trained to outperform humans in games like chess and Go. But DA likely contributes a lot more to brain function. We discuss many of those possible roles, how to think about computation with respect to neuromodulators like DA, how different time and spatial scales interact, and more.

Links:
Dopamine: A Simple AND Complex Story, by Daphne Cornelisse

Guests:
Ali Mohebi: @mohebial
Ben Engelhard

Timestamps:
0:00 - Intro
5:02 - Virtual Dopamine Conference
9:56 - History of dopamine's roles
16:47 - Dopamine circuits
21:13 - Multiple roles for dopamine
31:43 - Deep learning panel discussion
50:14 - Computation and neuromodulation
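For readers unfamiliar with the reward prediction error mentioned above, here is a minimal sketch of the standard temporal-difference (TD) update from reinforcement learning, the computation DA firing is often said to roughly resemble. The function name, parameter values, and scenario are illustrative, not from the episode.

```python
def rpe_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One TD(0) step: delta is the reward prediction error (RPE).

    value      -- current estimate of expected reward
    reward     -- reward actually received
    next_value -- estimated value of the next state
    alpha      -- learning rate; gamma -- discount factor
    """
    # RPE: what we got (plus discounted future) minus what we expected
    delta = reward + gamma * next_value - value
    # Nudge the estimate toward the observed outcome
    new_value = value + alpha * delta
    return new_value, delta

# Hypothetical example: we expected 0.5 reward but received 1.0
# (terminal state, so next_value = 0). delta is positive -- better
# than expected -- so the value estimate increases.
v, delta = rpe_update(value=0.5, reward=1.0, next_value=0.0)
```

In the dopamine analogy, a positive delta corresponds to a burst of DA firing (outcome better than expected) and a negative delta to a dip below baseline.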
Aug 19, 2021 • 1h 21min

BI NMA 06: Advancing Neuro Deep Learning Panel

Aug 13, 2021 • 1h 24min

BI NMA 05: NLP and Generative Models Panel

This is the 5th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school, and the 2nd of 3 in the deep learning series. In this episode, the panelists discuss their experiences "doing more with fewer parameters": convnets, RNNs, attention & transformers, and generative models (VAEs & GANs).

Panelists:
Brad Wyble: @bradpwyble
Kyunghyun Cho: @kchonyc
He He: @hhexiy
João Sedoc: @JoaoSedoc

The other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, and continual learning/causality.
Aug 6, 2021 • 59min

BI NMA 04: Deep Learning Basics Panel

This is the 4th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school, and the first of 3 in the deep learning series. In this episode, the panelists discuss their experiences with some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.

Guests:
Amita Kapoor
Lyle Ungar: @LyleUngar
Surya Ganguli: @SuryaGanguli

The other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fifth panel, about "doing more with fewer parameters": convnets, RNNs, attention & transformers, and generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, and continual learning/causality.
Jul 28, 2021 • 1h 38min

BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness

Erik, Kevin, and I discuss... well, a lot of things. Erik's recent novel The Revelations is a story about a group of neuroscientists trying to develop a good theory of consciousness (with a murder mystery plot). Kevin's book Innate: How the Wiring of Our Brains Shapes Who We Are describes the messy process of getting from DNA, through epigenetics and development, to our personalities. We talk about both books, then dive deeper into topics like whether brains evolved for moving our bodies vs. consciousness, how information theory is lending insights to emergent phenomena, and the role of agency with respect to what counts as intelligence.

Links:
Kevin's website
Erik's website
Twitter: @WiringtheBrain (Kevin); @erikphoel (Erik)

Books:
Innate: How the Wiring of Our Brains Shapes Who We Are
The Revelations

Papers (Erik):
Falsification and consciousness
The emergence of informative higher scales in complex networks
Emergence as the conversion of information: A unifying theory

Timestamps:
0:00 - Intro
3:28 - The Revelations - Erik's novel
15:15 - Innate - Kevin's book
22:56 - Cycle of progress
29:05 - Brains for movement or consciousness?
46:46 - Freud's influence
59:18 - Theories of consciousness
1:02:02 - Meaning and emergence
1:05:50 - Reduction in neuroscience
1:23:03 - Micro and macro - emergence
1:29:35 - Agency and intelligence
Jul 22, 2021 • 1h 1min

BI NMA 03: Stochastic Processes Panel

This is the third in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.

Panelists:
Yael Niv: @yael_niv
Konrad Kording: @KordingLab
Previous BI episodes: BI 027 Ioana Marinescu & Konrad Kording: Causality in Quasi-Experiments; BI 014 Konrad Kording: Regulators, Mount Up!
Sam Gershman: @gershbrain
Previous BI episodes: BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?; BI 028 Sam Gershman: Free Energy Principle & Human Machines
Tim Behrens: @behrenstim
Previous BI episodes: BI 035 Tim Behrens: Abstracting & Generalizing Knowledge, & Human Replay; BI 024 Tim Behrens: Cognitive Maps

The other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Fifth panel, about "doing more with fewer parameters": convnets, RNNs, attention & transformers, and generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, and continual learning/causality.
