
Brain Inspired

Latest episodes

Jul 22, 2023 • 1h 25min

BI 171 Mike Frank: Early Language and Cognition

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience.

My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition Lab at Stanford. Mike's main interests center on how children learn language - in particular, he focuses on early word learning and what it tells us about other cognitive functions, like concept formation and social cognition. We discuss that, plus:

- his love for developing open datasets that anyone can use
- the dance he dances between bottom-up, data-driven approaches in this big-data era, traditional experimental approaches, and top-down, theory-driven approaches
- how early language learning in children differs from LLM learning
- Mike's rational speech act model of language use, which considers the intentions, or pragmatics, of speakers and listeners in dialogue (a toy sketch follows these notes)

Language & Cognition Lab
Twitter: @mcxfrank.
I mentioned Mike's tweet thread about saying LLMs "have" cognitive functions.

Related papers:
- Pragmatic language interpretation as probabilistic inference
- Toward a "Standard Model" of Early Language Learning
- The pervasive role of pragmatics in early language
- The Structure of Developmental Variation in Early Childhood
- Relational reasoning and generalization using non-symbolic neural networks
- Unsupervised neural network models of the ventral visual stream
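Mike's rational speech act (RSA) model has a compact probabilistic core: a literal listener interprets an utterance by its truth conditions, a speaker chooses utterances to be informative to that literal listener, and a pragmatic listener reasons back about the speaker's choice. Here is a minimal sketch of that recursion; the two-referent scenario, lexicon, and rationality parameter are illustrative assumptions, not details from the episode.

```python
import numpy as np

# Minimal rational speech act (RSA) sketch. The scenario, lexicon,
# and alpha below are illustrative assumptions, not from the episode.
utterances = ["glasses", "hat"]
meanings = ["wears glasses", "wears glasses and hat"]

# lexicon[u, m] = 1 if utterance u is literally true of meaning m.
lexicon = np.array([[1.0, 1.0],   # "glasses" is true of both referents
                    [0.0, 1.0]])  # "hat" is true only of the second

prior = np.full(len(meanings), 0.5)  # uniform prior over referents
alpha = 1.0                          # speaker rationality

def normalize(x):
    return x / x.sum(axis=1, keepdims=True)

# Literal listener: P(m | u) proportional to truth * prior.
L0 = normalize(lexicon * prior)

# Pragmatic speaker: P(u | m) proportional to exp(alpha * log L0(m | u)).
S1 = normalize(np.exp(alpha * np.log(L0.T + 1e-12)))

# Pragmatic listener: P(m | u) proportional to S1(u | m) * prior.
L1 = normalize(S1.T * prior)

print(L1[0])  # hearing "glasses" now favors the glasses-only referent
```

Hearing "glasses", the pragmatic listener infers that the speaker would have said "hat" if the hat were relevant - the kind of pragmatic strengthening the model is built to capture.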
Jul 11, 2023 • 1h 17min

BI 170 Ali Mohebi: Starting a Research Lab

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience.

In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future.

Ali's website.
Twitter: @mohebial
Jun 28, 2023 • 1h 42min

BI 169 Andrea Martin: Neural Dynamics and Language

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience.

My guest today is Andrea Martin, Research Group Leader in the department of Language and Computation in Neural Systems at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language, and to this end she is developing a theoretical model of language. The aim of the model is to account for the properties of language - its structure, its compositionality, its infinite expressibility - while adhering to physiological data we can measure from human brains.

Her theoretical model of language, among other things, brings in the idea of low-dimensional manifolds and neural dynamics along those manifolds. We've discussed manifolds a lot on the podcast: they are a kind of abstract structure in the space of possible neural population activity, and that manifold structure defines the range of possible trajectories, or pathways, the neural dynamics can take over time (a toy sketch follows these notes). One of Andrea's ideas is that manifolds might be a way for the brain to combine two properties of how we learn and use language. One is the statistical regularities found in language - a given word, for example, occurs more often near some words and less often near others; this statistical approach is the foundation of how large language models are trained. The other is the more formal structure of language: how it's arranged and organized in a way that gives it meaning to us. Perhaps these two properties of language come together as a single trajectory along a neural manifold. But she has lots of ideas, and we discuss many of them. And of course we discuss large language models, and how Andrea thinks of them with respect to biological cognition. We talk about modeling in general and what models do and don't tell us, and much more.

Andrea's website.
Twitter: @andrea_e_martin.

Related papers:
- A Compositional Neural Architecture for Language
- An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions
- Neural dynamics differentially encode phrases and sentences during spoken language comprehension
- Hierarchical structure in language and action: A formal comparison

Andrea mentions this book: The Geometry of Biological Time.
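To make the manifold idea concrete: a standard way to expose low-dimensional structure in neural population activity is to project it onto its principal components. The following toy sketch (simulated data and generic PCA, not Andrea's model) drives a 100-neuron population with two latent oscillators, so the population activity lies near a two-dimensional manifold that PCA recovers.

```python
import numpy as np

# Toy illustration of a low-dimensional neural manifold (simulated
# data; a generic demonstration, not Andrea Martin's model).
rng = np.random.default_rng(0)

T, n_neurons, n_latent = 500, 100, 2
t = np.linspace(0, 10, T)

# Two latent dynamical variables trace a trajectory on a 2-D manifold.
latents = np.stack([np.sin(2 * np.pi * 0.5 * t),
                    np.cos(2 * np.pi * 0.3 * t)], axis=1)   # (T, 2)

# Each neuron reads out a random mixture of the latents, plus noise,
# so the 100-D population activity lives near a 2-D subspace.
readout = rng.normal(size=(n_latent, n_neurons))
activity = latents @ readout + 0.1 * rng.normal(size=(T, n_neurons))

# PCA via SVD: the top components recover the manifold's dimensions.
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
print(var_explained[:5])  # nearly all variance in the first 2 components
```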
Jun 2, 2023 • 1h 55min

BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community.

This is one in a periodic series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, can momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art change our scientific attitudes and perspectives? Frauke Sandig and Eric Black recently made the documentary film AWARE: Glimpses of Consciousness, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinkers approaching the topic from their various perspectives. This isn't a "normal" Brain Inspired episode, but I hope you enjoy the discussion!

AWARE: Glimpses of Consciousness
Umbrella Films

0:00 - Intro
19:42 - Mechanistic reductionism
45:33 - Changing views during lifetime
53:49 - Did making the film alter your views?
57:49 - ChatGPT
1:04:20 - Materialist assumption
1:11:00 - Science of consciousness
1:20:49 - Transhumanism
1:32:01 - Integrity
1:36:19 - Aesthetics
1:39:50 - Response to the film
May 27, 2023 • 1h 28min

BI 167 Panayiota Poirazi: AI Brains Need Dendrites

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience.

Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology. Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, performing many varieties of important signal transformation before signals reach the cell body. For example, in 2003 Yiota showed that, because of dendrites, a single neuron can act as a two-layer artificial neural network (a generic sketch of that idea follows these notes), and since then others have shown single neurons can act as deeper multi-layer networks. In Yiota's opinion, an even more important function of dendrites is increased computing efficiency - something evolution favors, and something artificial networks will need to favor as well moving forward.

Poirazi Lab
Twitter: @YiotaPoirazi.

Related papers:
- Drawing Inspiration from Biological Dendrites to Empower Artificial Neural Networks
- Illuminating dendritic function with computational models
- Introducing the Dendrify framework for incorporating dendrites to spiking neural networks
- Pyramidal Neuron as Two-Layer Neural Network

0:00 - Intro
3:04 - Yiota's background
6:40 - Artificial networks and dendrites
9:24 - Dendrites special sauce?
14:50 - Where are we in understanding dendrite function?
20:29 - Algorithms, plasticity, and brains
29:00 - Functional unit of the brain
42:43 - Engrams
51:03 - Dendrites and nonlinearity
54:51 - Spiking neural networks
56:02 - Best level of biological detail
57:52 - Dendrify
1:05:41 - Experimental work
1:10:58 - Dendrites across species and development
1:16:50 - Career reflection
1:17:57 - Evolution of Yiota's thinking
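The two-layer idea can be sketched compactly: each dendritic subunit applies its own nonlinearity to its synaptic inputs, and the soma applies a second nonlinearity to the weighted sum of subunit outputs - exactly the structure of a two-layer artificial network. Below is a generic illustration with arbitrary random weights and sigmoid nonlinearities; it is a sketch of the idea, not the lab's fitted biophysical model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_layer_neuron(inputs, dendritic_weights, somatic_weights):
    """A single neuron modeled as a two-layer network (generic sketch
    of the idea; all weights and nonlinearities here are arbitrary).

    inputs: (n_branches, n_synapses) synaptic drive per dendritic branch
    dendritic_weights: (n_branches, n_synapses) synaptic weights
    somatic_weights: (n_branches,) coupling of each branch to the soma
    """
    # Layer 1: each dendritic branch sums its own synapses and applies
    # a local sigmoidal nonlinearity (the "hidden units").
    branch_out = sigmoid((inputs * dendritic_weights).sum(axis=1))
    # Layer 2: the soma combines branch outputs and applies a final
    # output nonlinearity.
    return sigmoid(branch_out @ somatic_weights)

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 20))        # 5 branches, 20 synapses each
w_dend = rng.normal(size=(5, 20))
w_soma = rng.normal(size=5)
print(two_layer_neuron(x, w_dend, w_soma))
```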
May 9, 2023 • 1h 27min

BI 166 Nick Enfield: Language vs. Reality

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience.

Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. A central question in the book is: what is language for? What's the function of language? You might be familiar with the debate about whether language evolved for each of us to think our wonderful human thoughts, or to communicate those thoughts to each other. Nick is on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people - I have a thought, I send it to you in language, and that thought is now in your head - then Nick wouldn't take either side. He argues the function of language goes beyond the transmission of information: it is primarily an evolved solution for social coordination - coordinating our behaviors and attention. When we use language, we're creating maps in our heads so we can agree on where to go. For example, when I say, "This is Brain Inspired," I'm pointing you to a place to meet me on a conceptual map, saying, "Get ready, we're about to have a great time again!" In any case, with those four words, "This is Brain Inspired," I'm not just transmitting information from my head into your head; I'm providing you with a landmark so you can focus your attention appropriately.

From that premise - that language is about social coordination - we talk about a handful of topics in his book, like the relationship between language and reality, and the idea that all language is framing - that is, how we say something influences how we think about it. We discuss how our language changes in different social situations, the role of stories, and of course, how LLMs fit into Nick's story about language.

Nick's website
Twitter: @njenfield
Book: Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists.
Papers: Linguistic concepts are self-generating choice architectures

0:00 - Intro
4:23 - Is learning about language important?
15:43 - Linguistic Anthropology
28:56 - Language and truth
33:57 - How special is language
46:19 - Choice architecture and framing
48:19 - Language for thinking or communication
52:30 - Agency and language
56:51 - Large language models
1:16:18 - Getting language right
1:20:48 - Social relationships and language
Apr 12, 2023 • 1h 39min

BI 165 Jeffrey Bowers: Psychology Gets No Respect

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community.

Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image: researchers compared the activity in models good at doing that to the activity in the parts of our brains good at doing that. It's been found in various other tasks, using various other models and analyses, many of which we've discussed on previous episodes. More recently, a similar story has emerged regarding language-related activity in our brains and the activity in large language models: namely, our brains' ability to predict an upcoming word can be correlated with a model's ability to predict an upcoming word (a minimal sketch of this predictive approach follows these notes). So the word is that these deep learning models are the best models of how our brains and cognition work.

However, this is where Jeff Bowers comes in and raises the psychology flag, so to speak. His message is that these predictive approaches to comparing artificial and biological cognition aren't enough, and can mask important differences between them. What we need to do instead is perform more hypothesis-driven tests, like those performed in psychology, to ask whether the models are indeed solving tasks the way our brains and minds do. Jeff and his group, among others, have been doing just that, and are discovering differences between models and minds that may be important if we want to use models to understand minds. We discuss some of his work and thoughts in this regard, and a lot more.

Website
Twitter: @jeffrey_bowers

Related papers:
- Deep Problems with Neural Network Models of Human Vision
- Parallel Distributed Processing Theory in the Age of Deep Networks
- Successes and critical failures of neural networks in capturing human-like speech recognition

0:00 - Intro
3:52 - Testing neural networks
5:35 - Neuro-AI needs psychology
23:36 - Experiments in AI and neuroscience
23:51 - Why build networks like our minds?
44:55 - Vision problem spaces, solution spaces, training data
55:45 - Do we implement algorithms?
1:01:33 - Relational and combinatorial cognition
1:06:17 - Comparing representations in different networks
1:12:31 - Large language models
1:21:10 - Teaching LLMs nonsense languages
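The predictive approach Jeff critiques can be caricatured in a few lines: fit a linear map from model activations to recorded brain responses, then score the correlation of the predictions on held-out stimuli. This sketch uses simulated arrays and ridge regression; the shapes, data, and regularization are illustrative assumptions, not any particular study's pipeline.

```python
import numpy as np

# Generic sketch of "linear predictivity" model-brain comparison
# (simulated data; not any specific study's pipeline).
rng = np.random.default_rng(2)

n_stimuli, n_units, n_voxels = 200, 300, 50
X = rng.normal(size=(n_stimuli, n_units))                  # model activations
true_map = rng.normal(size=(n_units, n_voxels))
Y = X @ true_map + rng.normal(size=(n_stimuli, n_voxels))  # "brain" responses

train, test = np.arange(150), np.arange(150, 200)

# Ridge regression from model features to brain responses.
lam = 10.0
A = X[train].T @ X[train] + lam * np.eye(n_units)
W = np.linalg.solve(A, X[train].T @ Y[train])

# Score: correlation of predicted vs. actual held-out responses.
pred = X[test] @ W
r = [np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] for v in range(n_voxels)]
print(np.mean(r))  # a high r is read as model-brain similarity
```

Jeff's point is that a high score here can mask qualitative differences that hypothesis-driven, psychology-style experiments would expose.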
Apr 1, 2023 • 1h 32min

BI 164 Gary Lupyan: How Language Affects Thought

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience.

Gary Lupyan runs the Lupyan Lab at the University of Wisconsin-Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models. But Gary is more focused on how language - naming things, categorizing things - changes our cognition related to those things: how does naming something change our perception of it, and so on. He's interested in how concepts come about and how they map onto language, so we talk about some of his work and ideas related to those topics. We actually start the discussion with some of Gary's work on the variability of individual humans' phenomenal experience and how it affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to test experimentally.

Lupyan Lab.
Twitter: @glupyan.

Related papers:
- Hidden Differences in Phenomenal Experience
- Verbal interference paradigms: A systematic review investigating the role of language in cognition

Gary mentioned Richard Feynman's Ways of Thinking video.
Gary and Andy Clark's Aeon article: Super-cooperators.

0:00 - Intro
2:36 - Words and communication
14:10 - Phenomenal variability
26:24 - Co-operating minds
38:11 - Large language models
40:40 - Neuro-symbolic AI, scale
44:43 - How LLMs have changed Gary's thoughts about language
49:26 - Meaning, grounding, and language
54:26 - Development of language
58:53 - Symbols and emergence
1:03:20 - Language evolution in the LLM era
1:08:05 - Concepts
1:11:17 - How special is language?
1:18:08 - AGI
Mar 20, 2023 • 1h 22min

BI 163 Ellie Pavlick: The Mind of a Language Model

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience.

Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language - although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work: what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models - for example, probing them to see whether something symbol-like might be implemented in the models, even though they are deep learning neural networks, which aren't supposed to be able to work in a symbol-like manner (a minimal probing sketch follows these notes). We also discuss whether grounding is required for language understanding - that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.

Language Understanding and Representation Lab
Twitter: @Brown_NLP

Related papers:
- Semantic Structure in Deep Learning
- Pretraining on Interactions for Learning Grounded Affordance Representations
- Mapping Language Models to Grounded Conceptual Spaces

0:00 - Intro
2:34 - Will LLMs make us dumb?
9:01 - Evolution of language
17:10 - Changing views on language
22:39 - Semantics, grounding, meaning
37:40 - LLMs, humans, and prediction
41:19 - How to evaluate LLMs
51:08 - Structure, semantics, and symbols in models
1:00:08 - Dimensionality
1:02:08 - Limitations of LLMs
1:07:47 - What do linguists think?
1:14:23 - What is language for?
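Probing has a simple core: freeze the model, extract hidden states for inputs labeled with some property, and train a small classifier on those states; reliably above-chance accuracy suggests the property is decodable from the representation. Here is a minimal linear-probe sketch with simulated hidden states and made-up labels (illustrative assumptions throughout, not Ellie's actual experiments, which probe real model internals).

```python
import numpy as np

# Minimal linear-probe sketch (simulated hidden states and labels;
# illustrative only, not Ellie Pavlick's actual experiments).
rng = np.random.default_rng(3)

n_examples, hidden_dim = 400, 64
labels = rng.integers(0, 2, size=n_examples)  # e.g., singular vs. plural

# Pretend hidden states: the property is (noisily) linearly embedded.
direction = rng.normal(size=hidden_dim)
hidden = (rng.normal(size=(n_examples, hidden_dim))
          + np.outer(labels - 0.5, direction))

# Train a logistic-regression probe with plain gradient descent.
w, b = np.zeros(hidden_dim), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(hidden @ w + b)))  # predicted probabilities
    grad = p - labels                            # logistic-loss gradient
    w -= 0.1 * hidden.T @ grad / n_examples
    b -= 0.1 * grad.mean()

acc = ((hidden @ w + b > 0) == labels).mean()
print(acc)  # well above chance -> property is linearly decodable
```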
Mar 8, 2023 • 1h 23min

BI 162 Earl K. Miller: Thoughts are an Emergent Property

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience.

Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out executive functions like working memory, attention, and decision-making. In particular, he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl's career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of brain oscillations for our cognition. Recently on BI we've discussed oscillations quite a bit: in episode 153, Carolyn Dicey-Jennings discussed her philosophical ideas relating attention to the notion of the self, and she leans a lot on Earl's research to make that argument; in episode 160, Ole Jensen discussed his work in humans showing that low-frequency oscillations exert top-down control on incoming sensory stimuli, in direct agreement with Earl's work over many years in nonhuman primates. So we continue that discussion relating low-frequency oscillations to executive control. We also discuss a new concept Earl has developed called spatial computing, an account of how brain oscillations can dictate where in various brain areas neural activity will be on or off, and hence whether it contributes to ongoing mental function (a toy gating simulation follows these notes). We also discuss working memory in particular, and a host of related topics.

Miller lab.
Twitter: @MillerLabMIT.

Related papers:
- An integrative theory of prefrontal cortex function. Annual Review of Neuroscience
- Working Memory Is Complex and Dynamic, Like Your Thoughts
- Traveling waves in the prefrontal cortex during working memory

0:00 - Intro
6:22 - Evolution of Earl's thinking
14:58 - Role of the prefrontal cortex
25:21 - Spatial computing
32:51 - Homunculus problem
35:34 - Self
37:40 - Dimensionality and thought
46:13 - Reductionism
47:38 - Working memory and capacity
1:01:45 - Capacity as a principle
1:05:44 - Silent synapses
1:10:16 - Subspaces in dynamics
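One way to picture oscillatory gating: a low-frequency rhythm modulates local excitability, so spiking is permitted near one phase and suppressed near the opposite phase. The toy simulation below is my illustration with generic parameters, not Earl's spatial computing model; it gates a Poisson-like spike train by the phase of a 10 Hz oscillation.

```python
import numpy as np

# Toy phase-gating simulation (generic parameters; an illustration of
# the idea, not Earl Miller's spatial computing model).
rng = np.random.default_rng(4)

dt = 0.001                                # 1 ms time steps
t = np.arange(0.0, 2.0, dt)               # 2 seconds
oscillation = np.sin(2 * np.pi * 10 * t)  # 10 Hz rhythm

# Excitability tracks the rhythm: firing is allowed near the peak
# and shut down near the trough - the oscillation gates activity.
rate = 40.0 * np.clip(oscillation, 0.0, None)  # spikes/s, rectified
spikes = rng.random(t.size) < rate * dt        # Poisson-like thinning

# Spike times should cluster around the oscillation's peak phase (pi/2).
phase = (2 * np.pi * 10 * t) % (2 * np.pi)
print(f"median spike phase: {np.median(phase[spikes]):.2f}"
      f" (peak = {np.pi / 2:.2f})")
```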
