Brain Inspired

Paul Middlebrooks
Jun 16, 2021 • 1h 26min

BI 108 Grace Lindsay: Models of the Mind

Grace's website.
Twitter: @neurograce.
Book: Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain.
We talked about Grace's work using convolutional neural networks to study vision and attention way back on episode 11.

Grace and I discuss her new book Models of the Mind, about the blossoming and conceptual foundations of the computational approach to studying minds and brains. Each chapter of the book focuses on one major topic and provides historical context, the major concepts that connect models to brain functions, and the current landscape of related research. We cover a handful of those topics during the episode, including the birth of AI, the difference between math in physics and in neuroscience, determining the neural code and how Shannon information theory plays a role (a toy version is sketched below), whether it's possible to guess a brain function from what we know about a brain structure, and "grand unified theories" of the brain. We also digress and explore topics beyond the book.

Timestamps:
0:00 - Intro
4:19 - Cognition beyond vision
12:38 - Models of the Mind - book overview
14:00 - The good and bad of using math
21:33 - I quiz Grace on her own book
25:03 - Birth of AI and computational approach
38:00 - Rediscovering old math for new neuroscience
41:00 - Topology as good math to know now
45:29 - Physics vs. neuroscience math
49:32 - Neural code and information theory
55:03 - Rate code vs. timing code
59:18 - Graph theory - can you deduce function from structure?
1:06:56 - Multiple realizability
1:13:01 - Grand unified theories of the brain
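To make the information-theory thread concrete, here is a minimal sketch, in the spirit of the book's neural-code chapter, of how one can ask how many bits a neuron's spike count carries about which stimulus was shown. This is my illustration, not code from the book: the two-stimulus setup and the Poisson firing rates are hypothetical.

```python
# Minimal sketch (not from the book): mutual information between a stimulus
# and a neuron's spike count. Stimuli, rates, and noise model are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

stimuli = [0, 1]                 # two hypothetical stimuli, shown equally often
rates = {0: 2.0, 1: 8.0}         # hypothetical mean spike counts (Poisson noise)
n_trials = 100_000

# Simulate trials: pick a stimulus, draw a Poisson spike count.
s = rng.choice(stimuli, size=n_trials)
counts = rng.poisson([rates[x] for x in s])

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Mutual information I(S; R) = H(R) - H(R | S), estimated from histograms.
_, r_counts = np.unique(counts, return_counts=True)
h_r = entropy(r_counts / n_trials)

h_r_given_s = 0.0
for stim in stimuli:
    mask = s == stim
    _, c = np.unique(counts[mask], return_counts=True)
    h_r_given_s += mask.mean() * entropy(c / mask.sum())

print(f"I(S; R) ~ {h_r - h_r_given_s:.3f} bits (max possible here: 1 bit)")
```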
Jun 6, 2021 • 1h 29min

BI 107 Steve Fleming: Know Thyself

Steve and I discuss many topics from his new book Know Thyself: The Science of Self-Awareness. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models that explain mechanisms of metacognition (one toy example is sketched below), how and why self-awareness evolved, which animals beyond humans harbor metacognition and how to test for it, its role and potential origins in theory of mind and social interaction, how our metacognitive skills develop over our lifetimes, what our metacognitive skill tells us about our other psychological traits, and so on. We also discuss what it might look like when we are able to build metacognitive AI, and whether that's even a good idea.

Steve's lab: The MetaLab.
Twitter: @smfleming.
Steve and Hakwan Lau discussed consciousness on episode 99.
Paper on metacognitive training: Domain-General Enhancements of Metacognitive Ability Through Adaptive Training.
The book: Know Thyself: The Science of Self-Awareness.

Timestamps:
0:00 - Intro
3:25 - Steve's career
10:43 - Sub-personal vs. personal metacognition
17:55 - Meditation and metacognition
20:51 - Replay tools for mind-wandering
30:56 - Evolutionary cultural origins of self-awareness
45:02 - Animal metacognition
54:25 - Aging and self-awareness
58:32 - Is more always better?
1:00:41 - Political dogmatism and overconfidence
1:08:56 - Reliance on AI
1:15:15 - Building self-aware AI
1:23:20 - Future evolution of metacognition
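As a toy example of the computational-model thread: a common signal-detection-style approach treats confidence (the type 2 judgment) as a read-out of the same noisy evidence that drives the decision (the type 1 judgment). The sketch below is my illustration, not a model from the episode or book; the signal strength and the logistic read-out follow standard equal-variance Gaussian assumptions.

```python
# Illustrative signal-detection sketch of metacognition: decide on noisy
# evidence, then report confidence as p(correct | evidence). Hypothetical
# parameters throughout; not a model from the episode.
import numpy as np

rng = np.random.default_rng(1)

d_prime = 1.5          # hypothetical signal strength
n_trials = 50_000

# Each trial: stimulus is +1 or -1; evidence = signal mean + unit Gaussian noise.
stim = rng.choice([+1, -1], size=n_trials)
evidence = stim * d_prime / 2 + rng.normal(size=n_trials)

choice = np.sign(evidence)           # type 1 decision: pick the likelier stimulus
correct = choice == stim

# Type 2 judgment: for equal priors and unit-variance noise, the posterior
# probability of being correct is a logistic function of |evidence|.
confidence = 1 / (1 + np.exp(-d_prime * np.abs(evidence)))

# A well-calibrated observer's confidence should track accuracy: bin and check.
bins = np.quantile(confidence, [0, 0.25, 0.5, 0.75, 1.0])
for lo, hi in zip(bins[:-1], bins[1:]):
    m = (confidence >= lo) & (confidence <= hi)
    print(f"confidence {lo:.2f}-{hi:.2f}: mean conf {confidence[m].mean():.2f}, "
          f"accuracy {correct[m].mean():.2f}")
```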
May 27, 2021 • 1h 32min

BI 106 Jacqueline Gottlieb and Robert Wilson: Deep Curiosity

Jackie and Bob discuss their research and thinking about curiosity. Jackie's background is in studying decision making and attention, recording neurons in nonhuman primates during eye movement tasks, and she's broadly interested in how we adapt our ongoing behavior. Curiosity is crucial for this, so she has recently focused on behavioral strategies for exercising curiosity, developing tasks that test exploration, information sampling, uncertainty reduction, and intrinsic motivation. Bob's background is in developing computational models of reinforcement learning (including the exploration-exploitation tradeoff) and decision making, and he uses behavioral and neuroimaging data in humans to test those models. He's broadly interested in how and whether we can understand brains and cognition using mathematical models. Recently he's been working on a model of curiosity known as deep exploration, which suggests we make decisions by deeply simulating a handful of scenarios and choosing based on the simulation outcomes (see the sketch below). We also discuss how one should go about one's career (qua curiosity), how eye movements compare with other windows into cognition, and whether we can and should create curious AI agents (Bob is an emphatic yes; Jackie is slightly worried that curious AI will be the time to start worrying about AI).

Jackie's lab: Jacqueline Gottlieb Laboratory at Columbia University.
Bob's lab: Neuroscience of Reinforcement Learning and Decision Making.
Twitter: @NRDLab (Bob; Jackie's not on twitter).

Related papers:
Curiosity, information demand and attentional priority.
Balancing exploration and exploitation with information and randomization.
Deep exploration as a unifying account of explore-exploit behavior.
Bob mentions an influential talk by Benjamin Van Roy: Generalization and Exploration via Value Function Randomization.
Bob mentions his paper with Anne Collins: Ten simple rules for the computational modeling of behavioral data.

Timestamps:
0:00 - Intro
4:15 - Central scientific interests
8:32 - Advent of mathematical models
12:15 - Career exploration vs. exploitation
28:03 - Eye movements and active sensing
35:53 - Status of eye movements in neuroscience
44:16 - Why are we curious?
50:26 - Curiosity vs. exploration vs. intrinsic motivation
1:02:35 - Directed vs. random exploration
1:06:16 - Deep exploration
1:12:52 - How to know what to pay attention to
1:19:49 - Does AI need curiosity?
1:26:29 - What trait do you wish you had more of?
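A minimal sketch of the deep-exploration idea as I understand it from the discussion (this is not the authors' code): sample a plausible world model from your beliefs, simulate a handful of multi-step futures per option, and choose by the simulated outcomes. The two-armed bandit and all parameters are hypothetical.

```python
# Illustrative sketch of "deep exploration" on a hypothetical two-armed bandit:
# instead of scoring options by immediate payoff alone, sample a belief about
# the world and simulate a few deep (multi-step) futures under it.
import numpy as np

rng = np.random.default_rng(2)

true_means = [0.4, 0.6]        # hypothetical true reward rates, unknown to agent
successes = [1, 1]             # Beta(1, 1) priors over each option's rate
failures = [1, 1]

def choose(horizon=5, n_rollouts=3):
    """Sample beliefs, simulate a handful of deep futures, pick the best arm."""
    scores = []
    for arm in (0, 1):
        # Draw one plausible reward rate from the posterior over this arm...
        p = rng.beta(successes[arm], failures[arm])
        # ...then "deeply" simulate a few multi-step futures under that belief.
        rollouts = rng.binomial(horizon, p, size=n_rollouts)
        scores.append(rollouts.mean())
    return int(np.argmax(scores))

for t in range(500):
    arm = choose()
    reward = rng.random() < true_means[arm]   # environment feedback
    if reward:
        successes[arm] += 1
    else:
        failures[arm] += 1

print("posterior means:",
      [successes[a] / (successes[a] + failures[a]) for a in (0, 1)])
```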
May 17, 2021 • 1h 2min

BI 105 Sanjeev Arora: Off the Convex Path

Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given earlier assumptions that it wouldn't or shouldn't work as well as it does. Deep learning poses a challenge for mathematics, because its methods aren't rooted in mathematical theory and are therefore a "black box" for math to open. We discuss why Sanjeev thinks optimization, the common framework for thinking about how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that result from different learning algorithms. We discuss two examples from his research that illustrate this: creating deep nets with infinitely wide layers (and the networks still find solutions among the infinitely many possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions! A sketch of such a schedule follows below). We also discuss his past focus on computational complexity, and why he doesn't share the current neuroscience optimism comparing brains to deep nets.

Sanjeev's website.
His research group website.
His blog: Off The Convex Path.

Papers we discuss:
On Exact Computation with an Infinitely Wide Neural Net.
An Exponential Learning Rate Schedule for Deep Learning.

Related:
Andrew Saxe covers related deep learning theory in episode 52.
Omri Barak discusses the importance of learning trajectories for understanding RNNs in episode 97.
Sanjeev mentions Christos Papadimitriou.

Timestamps:
0:00 - Intro
7:32 - Computational complexity
12:25 - Algorithms
13:45 - Deep learning vs. traditional optimization
17:01 - Evolving view of deep learning
18:33 - Reproducibility crisis in AI?
21:12 - Surprising effectiveness of deep learning
27:50 - "Optimization" isn't the right framework
30:08 - Infinitely wide nets
35:41 - Exponential learning rates
42:39 - Data as the next frontier
44:12 - Neuroscience and AI differences
47:13 - Focus on algorithms, architecture, and objective functions
55:50 - Advice for deep learning theorists
58:05 - Decoding minds
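As a concrete taste of the exponential-learning-rate result: the sketch below trains a toy network while multiplying the learning rate by a constant factor greater than one every epoch, using PyTorch's ExponentialLR scheduler. The paper's analysis concerns networks with normalization layers trained with weight decay; the tiny model, random data, and growth factor here are hypothetical stand-ins, not the paper's experimental setup.

```python
# Minimal sketch of an exponentially *increasing* learning rate schedule.
# Toy model and data are hypothetical; the paper's result applies to networks
# with normalization layers trained with weight decay (as loosely mimicked here).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.BatchNorm1d(32),
                      nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1,
                      momentum=0.9, weight_decay=5e-4)
# gamma > 1 makes ExponentialLR grow the learning rate instead of decaying it.
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=1.05)

x, y = torch.randn(256, 10), torch.randn(256, 1)   # toy data
for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    sched.step()   # lr at epoch t is 0.1 * 1.05**t: exponentially increasing

print(f"final lr: {sched.get_last_lr()[0]:.3f}, final loss: {loss.item():.4f}")
```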
May 7, 2021 • 1h 51min

BI 104 John Kounios and David Rosen: Creativity, Expertise, Insight

What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, still in its "wild west" days. We talk about a few creativity studies they've performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation), and the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John's book, The Eureka Factor), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative processes versus creative products, and a lot more.

John Kounios.
Secret Chord Laboratories (David's company).
Twitter: @JohnKounios; @NeuroBassDave.
John's book (with Mark Beeman) on insight and creativity: The Eureka Factor: Aha Moments, Creative Insight, and the Brain.

The papers we discuss or mention:
All You Need to Do Is Ask? The Exhortation to Be Creative Improves Creative Performance More for Nonexpert Than Expert Jazz Musicians.
Anodal tDCS to Right Dorsolateral Prefrontal Cortex Facilitates Performance for Novice Jazz Improvisers but Hinders Experts.
Dual-process contributions to creativity in jazz improvisations: An SPM-EEG study.

Timestamps:
0:00 - Intro
16:20 - Where are we broadly in the science of creativity?
18:23 - Origins of creativity research
22:14 - Divergent and convergent thought
26:31 - Secret Chord Labs
32:40 - Familiar surprise
38:55 - The Eureka Factor
42:27 - Dual process model
52:54 - Creativity and jazz expertise
55:53 - "Be creative" behavioral study
59:17 - Stimulating the creative brain
1:02:04 - Brain circuits underlying creativity
1:14:36 - What does this tell us about creativity?
1:16:48 - Intelligence vs. creativity
1:18:25 - Switching between creative modes
1:25:57 - Flow states and insight
1:34:29 - Creativity and insight in AI
1:43:26 - Creative products vs. process
Apr 26, 2021 • 1h 27min

BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading

Randal, Ken, and I discuss a host of topics around the future goal of uploading our minds into non-brain systems, to continue our mental lives and expand our range of experiences. The basic requirement for such a substrate-independent mind is to implement whole brain emulation. We discuss two basic approaches to whole brain emulation. The "scan and copy" approach proposes we somehow scan the entire structure of our brains (at whatever scale is necessary) and store that scan until some future date when we have figured out how to use that information to build a substrate that can house your mind. The "gradual replacement" approach proposes we slowly replace parts of the brain with functioning alternative machines, eventually replacing the entire brain with non-biological material while retaining a functioning mind. Randal and Ken are neuroscientists who understand the magnitude and challenges of a massive project like mind uploading, who also understand what we can do right now, with current technology, to advance toward that lofty goal, and who are thoughtful about what steps we need to take to enable further advancements.

Randal A. Koene
Twitter: @randalkoene
Carboncopies Foundation.
Randal's website.

Ken Hayworth
Twitter: @KennethHayworth
Brain Preservation Foundation.
YouTube videos.

Timestamps:
0:00 - Intro
6:14 - What Ken wants
11:22 - What Randal wants
22:29 - Brain preservation
27:18 - Aldehyde-stabilized cryopreservation
31:51 - Scan and copy vs. gradual replacement
38:25 - Building a roadmap
49:45 - Limits of current experimental paradigms
53:51 - Our evolved brains
1:06:58 - Counterarguments
1:10:31 - Animal models for whole brain emulation
1:15:01 - Understanding vs. emulating brains
1:22:37 - Current challenges
Apr 16, 2021 • 1h 32min

BI 102 Mark Humphries: What Is It Like To Be A Spike?

Mark and I discuss his book, The Spike: An Epic Journey Through the Brain in 2.1 Seconds. It chronicles how a series of action potentials fire through the brain during a couple seconds of someone's life. Starting with light hitting the retina as a person looks at a cookie, Mark describes how that light gets translated into spikes, how those spikes get processed in our visual system, and how they eventually transform into motor commands to grab that cookie. Along the way, he describes some of the big ideas throughout the history of studying brains (like the mechanisms proposed to explain how neurons seem to fire so randomly; one is sketched below), the big mysteries we currently face (like why so many neurons do so little), and some of the main theories to explain those mysteries (we're prediction machines!). A fun read and discussion. This is Mark's second time on the podcast - he was on episode 4 in the early days, talking more in depth about some of the work we discuss in this episode!

The Humphries Lab.
Twitter: @markdhumphries
Book: The Spike: An Epic Journey Through the Brain in 2.1 Seconds.

Related papers:
A spiral attractor network drives rhythmic locomotion.

Timestamps:
0:00 - Intro
3:25 - Writing a book
15:37 - Mark's main interest
19:41 - Future explanation of brain/mind
27:00 - Stochasticity and excitation/inhibition balance
36:56 - Dendritic computation for network dynamics
39:10 - Do details matter for AI?
44:06 - Spike failure
51:12 - Dark neurons
1:07:57 - Intrinsic spontaneous activity
1:16:16 - Best scientific moment
1:23:58 - Failure
1:28:45 - Advice
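One of those big ideas, the excitation/inhibition balance explanation for irregular firing, is easy to see in simulation: hold a leaky integrate-and-fire neuron just below threshold with a large but nearly cancelling mean input, and let fluctuations trigger the spikes. The sketch below is my illustration, not from the book, using a diffusion approximation of the balanced input with hypothetical round-number parameters.

```python
# Illustrative sketch of fluctuation-driven (balanced) spiking in a leaky
# integrate-and-fire neuron. The mean drive keeps the neuron subthreshold;
# spikes come from input noise, yielding irregular, Poisson-like firing.
import numpy as np

rng = np.random.default_rng(3)

dt, t_max = 0.1e-3, 5.0                      # 0.1 ms steps, 5 s simulated
tau = 20e-3                                  # membrane time constant (20 ms)
v_rest, v_thresh, v_reset = -70e-3, -50e-3, -70e-3
mu, sigma = 12e-3, 6e-3                      # subthreshold mean drive + noise scale

v = v_rest
spike_times = []
for i in range(int(t_max / dt)):
    noise = sigma * np.sqrt(2 * dt / tau) * rng.normal()
    v += (-(v - v_rest) + mu) * dt / tau + noise
    if v >= v_thresh:                        # a fluctuation crossed threshold
        spike_times.append(i * dt)
        v = v_reset

isis = np.diff(spike_times)
cv = isis.std() / isis.mean()                # CV near 1 = irregular, Poisson-like
print(f"rate ~ {len(spike_times) / t_max:.1f} Hz, ISI CV ~ {cv:.2f}")
```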
Apr 6, 2021 • 1h 45min

BI 101 Steve Potter: Motivating Brains In and Out of Dishes

Steve and I discuss his book, How to Motivate Your Students to Love Learning, which is both a memoir and a guide for teachers and students to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses while running his own lab studying the activity of live cultured neural populations (which we discuss at length in his previous episode). He relentlessly tested and tweaked his teaching methods, including soliciting constant feedback from students, to optimize their learning experiences. He settled on real-world, project-based learning approaches, like writing Wikipedia articles and helping groups of students design and carry out their own experiments. We discuss that, plus the science behind learning, principles important for motivating students and maintaining that motivation, and many of the other valuable insights he shares in the book. In the first half of the episode we discuss diverse neuroscience and AI topics, like brain organoids, mind uploading, synaptic plasticity, and more. Then we discuss many of the stories and lessons from his book, which I recommend for teachers, mentors, and life-long students who want to ensure they're optimizing their own learning.

Potter Lab.
Twitter: @stevempotter.
The book: How to Motivate Your Students to Love Learning.
The glial cell activity movie.

Timestamps:
0:00 - Intro
6:38 - Brain organoids
18:48 - Glial cell plasticity
24:50 - Whole brain emulation
35:28 - Industry vs. academia
45:32 - Intro to book: How To Motivate Your Students To Love Learning
48:29 - Steve's childhood influences
57:21 - Developing one's own intrinsic motivation
1:02:30 - Real-world assignments
1:08:00 - Keys to motivation
1:11:50 - Peer pressure
1:21:16 - Autonomy
1:25:38 - Wikipedia real-world assignment
1:33:12 - Relation to running a lab
Mar 28, 2021 • 50min

BI 100.6 Special: Do We Have the Right Vocabulary and Concepts?

We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you've enjoyed the collections as well. If you're wondering where the missing 5th part is, I reserved it exclusively for Brain Inspired's magnificent Patreon supporters (thanks guys!!!!). The final question I sent to previous guests: Do we already have the right vocabulary and concepts to explain how brains and minds are related? Why or why not?

Timestamps:
0:00 - Intro
5:04 - Andrew Saxe
7:04 - Thomas Naselaris
7:46 - John Krakauer
9:03 - Federico Turkheimer
11:57 - Steve Potter
13:31 - David Krakauer
17:22 - Dean Buonomano
20:28 - Konrad Kording
22:00 - Uri Hasson
23:15 - Rodrigo Quian Quiroga
24:41 - Jim DiCarlo
25:26 - Marcel van Gerven
28:02 - Mazviita Chirimuuta
29:27 - Brad Love
31:23 - Patrick Mayo
32:30 - György Buzsáki
37:07 - Pieter Roelfsema
37:26 - David Poeppel
40:22 - Paul Cisek
44:52 - Talia Konkle
47:03 - Steve Grossberg
Mar 21, 2021 • 1h 4min

BI 100.4 Special: What Ideas Are Holding Us Back?

In the 4th installment of our 100th episode celebration, previous guests responded to the question: What ideas, assumptions, or terms do you think are holding back neuroscience/AI, and why? As usual, the responses are varied and wonderful!

Timestamps:
0:00 - Intro
6:41 - Pieter Roelfsema
7:52 - Grace Lindsay
10:23 - Marcel van Gerven
11:38 - Andrew Saxe
14:05 - Jane Wang
16:50 - Thomas Naselaris
18:14 - Steve Potter
19:18 - Kendrick Kay
22:17 - Blake Richards
27:52 - Jay McClelland
30:13 - Jim DiCarlo
31:17 - Talia Konkle
33:27 - Uri Hasson
35:37 - Wolfgang Maass
38:48 - Paul Cisek
40:41 - Patrick Mayo
41:51 - Konrad Kording
43:22 - David Poeppel
44:22 - Brad Love
46:47 - Rodrigo Quian Quiroga
47:36 - Steve Grossberg
48:47 - Mark Humphries
52:35 - John Krakauer
55:13 - György Buzsáki
59:50 - Stefan Leijnen
1:02:18 - Nathaniel Daw
