Brain Inspired

Paul Middlebrooks
Jul 15, 2021 • 1h 15min

BI NMA 02: Dynamical Systems Panel

Panelists:
Adrienne Fairhall: @alfairhall
Bing Brunton: @bingbrunton
Kanaka Rajan: @rajankdr (see BI 054 Kanaka Rajan: How Do We Switch Behaviors?)

This is the second in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with linear systems, real neurons, and dynamic networks.

Other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Fifth panel, about "doing more with fewer parameters": convnets, RNNs, attention & transformers, generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.
Jul 12, 2021 • 1h 27min

BI NMA 01: Machine Learning Panel

Panelists:
Athena Akrami: @AthenaAkrami
Demba Ba
Gunnar Blohm: @GunnarBlohm
Kunlin Wei

This is the first in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.

Other panels:
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Fifth panel, about "doing more with fewer parameters": convnets, RNNs, attention & transformers, generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.
Jul 6, 2021 • 1h 25min

BI 110 Catherine Stinson and Jessica Thompson: Neuro-AI Explanation

Catherine Stinson is a philosopher focused on AI and neuroscience, and Jessica Thompson, a postdoc in cognitive neuroscience, studies explanation across these fields. They dive into how explanations in neuroscience and AI can be unified. Jessica advocates shifting focus from singular brain areas or models to shared phenomena across both domains. They also discuss the balance between intelligibility and empirical fit in models, the role of philosophy in shaping scientific inquiry, and the importance of interdisciplinary collaboration for innovative research.
Jun 26, 2021 • 2h 4min

BI 109 Mark Bickhard: Interactivism

Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark's account of representations and how what we represent in our minds relates to the external world - a challenge that has plagued the mind-body problem since the beginning. In short, representations are anticipated interactions with the world, which can be true (if enacting one helps an organism maintain its thermodynamic relation with the world) or false (if it doesn't). Representations are also functional: they serve to keep the organism in the far-from-equilibrium thermodynamic state it needs for self-maintenance. Over the years, Mark has filled out Interactivism, starting from a process-metaphysics foundation and building up to account for representations, how our brains might implement them, and why AI is hindered by our modern "encoding" version of representation. We also compare Interactivism to similar frameworks, like enactivism, predictive processing, and the free energy principle. For related discussions on the foundations (and issues) of representations, check out episode 60 with Michael Rescorla, episode 61 with Jörn Diedrichsen and Niko Kriegeskorte, and especially episode 79 with Romain Brette.

Mark's website.
Related papers:
Interactivism: A manifesto.
Plenty of other papers are available via his website.
Also mentioned:
The First Half Second: The Microgenesis and Temporal Dynamics of Unconscious and Conscious Visual Processes (2006), Haluk Ögmen and Bruno G. Breitmeyer.
Maiken Nedergaard's work on sleep.

Timestamps:
0:00 - Intro
5:06 - Previous and upcoming book
9:17 - Origins of Mark's thinking
14:31 - Process vs. substance metaphysics
27:10 - Kinds of emergence
32:16 - Normative emergence to normative function and representation
36:33 - Representation in Interactivism
46:07 - Situation knowledge
54:02 - Interactivism vs. Enactivism
1:09:37 - Interactivism vs. Predictive/Bayesian brain
1:17:39 - Interactivism vs. Free energy principle
1:21:56 - Microgenesis
1:33:11 - Implications for neuroscience
1:38:18 - Learning as variation and selection
1:45:07 - Implications for AI
1:55:06 - Everything is a clock
1:58:14 - Is Mark a philosopher?
Jun 16, 2021 • 1h 26min

BI 108 Grace Lindsay: Models of the Mind

Grace Lindsay, a computational neuroscientist and author, dives into her book, Models of the Mind, exploring the fusion of physics and neuroscience. She discusses the evolution of AI, the significance of mathematical models, and the philosophical impact of McCulloch and Pitts’ work. Grace highlights the importance of rediscovering old mathematical frameworks like graph theory for modern neuroscience. The conversation also touches on grand unified theories of the brain and the challenges of defining the neural code, blending history with cutting-edge science in a captivating manner.
Jun 6, 2021 • 1h 29min

BI 107 Steve Fleming: Know Thyself

Steve and I discuss many topics from his new book, Know Thyself: The Science of Self-Awareness. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models of the mechanisms of metacognition, how and why self-awareness evolved, which animals beyond humans harbor metacognition and how to test it, its role and potential origins in theory of mind and social interaction, how our metacognitive skills develop over our lifetimes, what our metacognitive skills tell us about our other psychological traits, and so on. We also discuss what it might look like when we are able to build metacognitive AI, and whether that's even a good idea.

Steve's lab: The MetaLab.
Twitter: @smfleming.
Steve and Hakwan Lau on episode 99, about consciousness.
Papers:
Metacognitive training: Domain-General Enhancements of Metacognitive Ability Through Adaptive Training.
The book:
Know Thyself: The Science of Self-Awareness.

Timestamps:
0:00 - Intro
3:25 - Steve's career
10:43 - Sub-personal vs. personal metacognition
17:55 - Meditation and metacognition
20:51 - Replay tools for mind-wandering
30:56 - Evolutionary cultural origins of self-awareness
45:02 - Animal metacognition
54:25 - Aging and self-awareness
58:32 - Is more always better?
1:00:41 - Political dogmatism and overconfidence
1:08:56 - Reliance on AI
1:15:15 - Building self-aware AI
1:23:20 - Future evolution of metacognition
May 27, 2021 • 1h 32min

BI 106 Jacqueline Gottlieb and Robert Wilson: Deep Curiosity

Jackie and Bob discuss their research and thinking about curiosity. Jackie's background is in studying decision making and attention, recording neurons in nonhuman primates during eye movement tasks, and she's broadly interested in how we adapt our ongoing behavior. Curiosity is crucial for this, so she has recently focused on the behavioral strategies by which we exercise curiosity, developing tasks that test exploration, information sampling, uncertainty reduction, and intrinsic motivation. Bob's background is in developing computational models of reinforcement learning (including the exploration-exploitation tradeoff) and decision making, and he uses behavioral and neuroimaging data in humans to test the models. He's broadly interested in how and whether we can understand brains and cognition using mathematical models. Recently he's been working on a model of curiosity known as deep exploration, which suggests we make decisions by deeply simulating a handful of scenarios and choosing based on the simulation outcomes (see the sketch after these notes). We also discuss how one should go about one's career (qua curiosity), how eye movements compare with other windows into cognition, and whether we can and should create curious AI agents (Bob is an emphatic yes; Jackie is slightly worried that that will be the time to worry about AI).

Jackie's lab: Jacqueline Gottlieb Laboratory at Columbia University.
Bob's lab: Neuroscience of Reinforcement Learning and Decision Making.
Twitter: Bob: @NRDLab (Jackie's not on Twitter).
Related papers:
Curiosity, information demand and attentional priority.
Balancing exploration and exploitation with information and randomization.
Deep exploration as a unifying account of explore-exploit behavior.
Bob mentions an influential talk by Benjamin Van Roy: Generalization and Exploration via Value Function Randomization.
Bob mentions his paper with Anne Collins: Ten simple rules for the computational modeling of behavioral data.

Timestamps:
0:00 - Intro
4:15 - Central scientific interests
8:32 - Advent of mathematical models
12:15 - Career exploration vs. exploitation
28:03 - Eye movements and active sensing
35:53 - Status of eye movements in neuroscience
44:16 - Why are we curious?
50:26 - Curiosity vs. exploration vs. intrinsic motivation
1:02:35 - Directed vs. random exploration
1:06:16 - Deep exploration
1:12:52 - How to know what to pay attention to
1:19:49 - Does AI need curiosity?
1:26:29 - What trait do you wish you had more of?
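The deep exploration idea described above can be loosely illustrated in code. The following is a minimal, hedged sketch, not Bob's actual model: the agent imagines a handful of plausible "worlds," simulates each option within them, and commits to whichever option looked best in those imagined futures. All function names, option labels, and payoff values here are invented for illustration.

```python
import random

def deep_exploration_choice(actions, sample_world, rollout, n_scenarios=5):
    """Choose an action by imagining a few scenarios in depth.

    sample_world() draws one hypothesis about how the world might be;
    rollout(world, action) simulates that scenario forward and returns an
    imagined payoff. Both are hypothetical stand-ins for illustration only.
    """
    best_action, best_payoff = None, float("-inf")
    for _ in range(n_scenarios):
        world = sample_world()                  # one imagined version of the world
        for action in actions:
            payoff = rollout(world, action)     # deep simulation of that scenario
            if payoff > best_payoff:
                best_action, best_payoff = action, payoff
    return best_action

# Toy usage: two options whose true values are uncertain; uncertainty in the
# sampled worlds is what occasionally makes the risky option look attractive.
def sample_world():
    return {"explore": random.gauss(1.0, 1.0), "exploit": random.gauss(0.8, 0.1)}

choice = deep_exploration_choice(
    actions=["explore", "exploit"],
    sample_world=sample_world,
    rollout=lambda world, action: world[action],
)
print("chosen:", choice)
```

Because the choice is driven by a few sampled futures rather than long-run averages, uncertain options sometimes win, which is one intuition for how deep simulation can produce exploratory, curiosity-like behavior.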
May 17, 2021 • 1h 2min

BI 105 Sanjeev Arora: Off the Convex Path

Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given previous assumptions that it wouldn't or shouldn't work as well as it does. Deep learning theory poses a challenge for mathematics, because its methods aren't rooted in mathematical theory and are therefore a "black box" for math to open. We discuss how Sanjeev thinks optimization, the common framework for thinking about how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that result from different learning algorithms. We discuss two examples of his research to illustrate this: creating deep nets with infinitely wide layers (and the networks still find solutions among the infinite possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions! See the sketch after these notes). We also discuss his past focus on computational complexity and how he doesn't share the current neuroscience optimism comparing brains to deep nets.

Sanjeev's website.
His research group website.
His blog: Off The Convex Path.
Papers we discuss:
On Exact Computation with an Infinitely Wide Neural Net.
An Exponential Learning Rate Schedule for Deep Learning.
Related:
Andrew Saxe covers related deep learning theory in episode 52.
Omri Barak discusses the importance of learning trajectories to understand RNNs in episode 97.
Sanjeev mentions Christos Papadimitriou.

Timestamps:
0:00 - Intro
7:32 - Computational complexity
12:25 - Algorithms
13:45 - Deep learning vs. traditional optimization
17:01 - Evolving view of deep learning
18:33 - Reproducibility crisis in AI?
21:12 - Surprising effectiveness of deep learning
27:50 - "Optimization" isn't the right framework
30:08 - Infinitely wide nets
35:41 - Exponential learning rates
42:39 - Data as the next frontier
44:12 - Neuroscience and AI differences
47:13 - Focus on algorithms, architecture, and objective functions
55:50 - Advice for deep learning theorists
58:05 - Decoding minds
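To make the exponentially increasing learning rate idea concrete, here is a minimal PyTorch sketch of training with a learning rate that grows by a constant factor every epoch. This is an illustrative toy, not the setup from the paper discussed in the episode; the model, data, and growth factor are placeholder assumptions. The counterintuitive point discussed is that, with normalization layers and weight decay in place, training with a growing rate can still find good solutions.

```python
import torch
from torch import nn

# Minimal sketch: a small net with batch normalization and weight decay, trained
# with a learning rate multiplied by gamma > 1 after every epoch (illustrative only).
model = nn.Sequential(nn.Linear(10, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=1.1)  # rate grows each epoch

x, y = torch.randn(32, 10), torch.randn(32, 1)   # toy regression data
loss_fn = nn.MSELoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # learning rate increases exponentially over epochs
    print(f"epoch {epoch}: lr={scheduler.get_last_lr()[0]:.3f}, loss={loss.item():.3f}")
```

With gamma below 1 this is the familiar decaying schedule; setting it above 1, as here, is the "opposite of accepted wisdom" case mentioned in the summary.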
May 7, 2021 • 1h 51min

BI 104 John Kounios and David Rosen: Creativity, Expertise, Insight

What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, still in its "wild west" days. We talk about a few creativity studies they've performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation), and the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John's book, The Eureka Factor), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative process versus creative products, and a lot more.

John Kounios.
Secret Chord Laboratories (David's company).
Twitter: @JohnKounios; @NeuroBassDave.
John's book (with Mark Beeman) on insight and creativity: The Eureka Factor: Aha Moments, Creative Insight, and the Brain.
The papers we discuss or mention:
All You Need to Do Is Ask? The Exhortation to Be Creative Improves Creative Performance More for Nonexpert Than Expert Jazz Musicians.
Anodal tDCS to Right Dorsolateral Prefrontal Cortex Facilitates Performance for Novice Jazz Improvisers but Hinders Experts.
Dual-process contributions to creativity in jazz improvisations: An SPM-EEG study.

Timestamps:
0:00 - Intro
16:20 - Where are we broadly in science of creativity?
18:23 - Origins of creativity research
22:14 - Divergent and convergent thought
26:31 - Secret Chord Labs
32:40 - Familiar surprise
38:55 - The Eureka Factor
42:27 - Dual process model
52:54 - Creativity and jazz expertise
55:53 - "Be creative" behavioral study
59:17 - Stimulating the creative brain
1:02:04 - Brain circuits underlying creativity
1:14:36 - What does this tell us about creativity?
1:16:48 - Intelligence vs. creativity
1:18:25 - Switching between creative modes
1:25:57 - Flow states and insight
1:34:29 - Creativity and insight in AI
1:43:26 - Creative products vs. process
Apr 26, 2021 • 1h 27min

BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading

In a fascinating discussion, neuroscientists Ken Hayworth and Randal Koene delve into mind uploading and the potential for whole brain emulation. Ken, known for his work on brain preservation, introduces innovative methods like aldehyde-stabilized cryopreservation. Randal advocates for understanding both the scan-and-copy and gradual replacement approaches to achieve substrate-independent minds. They discuss the philosophical and practical implications of these technologies, emphasizing the challenges in converting brain reconstructions into functional models.
