BI 208 Gabriele Scheler: From Verbal Thought to Neuron Computation
Mar 26, 2025
Gabriele Scheler, a computational neuroscientist and co-founder of the Carl Correns Foundation for Mathematical Biology, delves into groundbreaking neuron models and their applications in AI. She argues that traditional neuron models have stagnated, advocating for approaches that account for internal cellular computations, which could revolutionize AI development. Gabriele also explores the interplay between language and thought, critiques the limitations of current AI, and reflects on how our internal verbal dialogues shape cognition, emphasizing the need for a nuanced understanding in neuroscience.
Gabriele Scheler's new neuron model emphasizes internal calculations within cells, potentially revolutionizing artificial intelligence by creating smarter neural networks.
The podcast highlights the complexity of human cognition, illustrating that internal verbal monologues are integral to thought organization and problem-solving.
Concerns are raised about current AI models' simplistic approaches, suggesting a need to consider the rich internal dynamics of neurons for better cognitive replication.
Deep dives
The Role of Language in Cognitive Advancement
The podcast discusses the significant increase in cortical capacity in humans, suggesting that language is a key factor in this expansion. Current neuroscience struggles to explain how neurons collaborate to produce grammatical sentences, highlighting the complexity of human cognition. This complexity suggests that merely replicating neuronal structures in machines does not yield genuine human-like intelligence: mimicking neuronal activity alone does not capture the essence of human thought, pointing to a gap in our ability to reproduce cognitive processes in artificial systems.
Innovative Neuron Modeling and AI Implications
Attention is drawn to a new neuron model that aims to simplify artificial intelligence by incorporating a deeper understanding of internal neuron processes. This model respects not only the external spiking activity but also the computations occurring within the cell’s membrane and nucleus. It represents a departure from traditional neuron models that have remained largely unchanged for years. The expectation is that this model will lead to smarter neural networks, enabling more efficient AI by leveraging compact representations of neuronal behavior.
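The contrast between a spiking-only model and one with internal cellular state can be sketched with a toy simulation. The code below is purely illustrative and is not Scheler's actual model: it compares a textbook leaky integrate-and-fire neuron with a hypothetical variant that adds a slow internal gain variable `g` (a stand-in for intracellular signaling); the function names and all parameter values are invented for this sketch.

```python
def lif_step(v, i_in, tau=20.0, v_rest=-65.0, v_thresh=-50.0, dt=1.0):
    """One Euler step of a textbook leaky integrate-and-fire neuron:
    all of the neuron's computation lives in the membrane potential v."""
    v = v + dt * ((v_rest - v) / tau + i_in)
    if v >= v_thresh:
        return v_rest, True   # spike, then reset
    return v, False

def internal_state_step(v, g, i_in, tau=20.0, tau_g=500.0,
                        v_rest=-65.0, v_thresh=-50.0, dt=1.0):
    """Same neuron plus a slow internal variable g (a hypothetical
    stand-in for intracellular signaling) that scales the input gain."""
    v = v + dt * ((v_rest - v) / tau + g * i_in)
    spiked = v >= v_thresh
    if spiked:
        v = v_rest
        g = max(0.1, g - 0.2)        # each spike depresses the internal gain
    g = g + dt * (1.0 - g) / tau_g   # gain slowly recovers toward baseline
    return v, g, spiked

# Drive both neurons with the same constant input for 1000 ms.
v1, v2, g = -65.0, -65.0, 1.0
spikes_plain = spikes_internal = 0
for _ in range(1000):
    v1, s1 = lif_step(v1, 1.0)
    v2, g, s2 = internal_state_step(v2, g, 1.0)
    spikes_plain += s1
    spikes_internal += s2
# Identical external drive, but the internal-state neuron adapts and
# fires fewer spikes, because part of its computation happens inside
# the cell rather than at the membrane alone.
```

The point of the toy is only that two neurons can look identical from the outside (same inputs, same spiking machinery) yet behave differently because of state hidden inside the cell, which is the kind of computation spiking-only models discard.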
The Importance of Internal Monologue in Thought Processes
The podcast highlights the connection between internal verbal monologues and cognitive processes, positing that language plays a crucial role in organizing thoughts. The discussion reveals that while some people visualize thoughts, verbal thought is central to human cognition for many. This internal speech aids in problem-solving and enhances memory, emphasizing that language is not merely an additional tool but an integral component of our mental experience. The complexities of how we think underscore the potential shortcomings of AI systems that lack this nuanced understanding.
Challenges in Contemporary Neuroscience and AI
Concerns are raised about the conventional approach to AI and neuroscience, which often overlooks the complexity of biological processes within neurons. The dialogue stresses that many current artificial intelligence models are overly simplistic, relying primarily on spiking activities while neglecting the rich internal dynamics that contribute to cognition. This simplification leads to models that may not capture the intricate functions of real neurons, diminishing their effectiveness in mimicking human-like intelligence. The speaker advocates for a re-evaluation of how we build and interpret models in both neuroscience and AI fields.
Setting Up the Carl Correns Foundation
The conversation shifts to the establishment of the Carl Correns Foundation, created to support research and development in neuroscience. The foundation aims to foster innovative projects, particularly those focusing on understanding cognitive processes through advanced computational models. It offers a platform for early-career scientists to pursue novel research that aligns with the foundation's objectives of exploring misunderstood areas in neuroscience. This initiative reflects a broader desire to ensure that foundational scientific inquiry remains vibrant and supported in an increasingly competitive academic environment.
Support the show to get full episodes, full archive, and join the Discord community.
Gabriele Scheler co-founded the Carl Correns Foundation for Mathematical Biology. Carl Correns was her great grandfather, one of the early pioneers in genetics. Gabriele is a computational neuroscientist, whose goal is to build models of cellular computation, and much of her focus is on neurons.
We discuss her theoretical work building a new kind of single neuron model. She, like Dmitri Chklovskii a few episodes ago, believes we've been stuck with essentially the same family of neuron models for a long time, despite minor variations on those models. The model Gabriele is working on, for example, respects not only the computations going on externally, via spiking, which has been the only game in town forever, but also the computations going on within the cell itself. Gabriele is in line with previous guests like Randy Gallistel, David Glanzman, and Hessam Akhlaghpour, who argue that we need to pay attention to how neurons are computing various things internally and how that affects our cognition. Gabriele also believes the new neuron model she's developing will improve AI, drastically simplifying the models by providing them with smarter neurons, essentially.
We also discuss the importance of neuromodulation, her interest in understanding how we think via our internal verbal monologue, her lifelong interest in language in general, what she thinks about LLMs, why she decided to start her own foundation to fund her science, and what that experience has been like so far. Gabriele has been working on these topics for many years, and as you'll hear in a moment, she was there when computational neuroscience was just starting to pop up in a few places, when it was a nascent field, unlike its current ubiquity in neuroscience.
0:00 - Intro
4:41 - Gabriele's early interests in verbal thinking
14:14 - What is thinking?
24:04 - Starting one's own foundation
58:18 - Building a new single neuron model
1:19:25 - The right level of abstraction
1:25:00 - How a new neuron would change AI