BI 184 Peter Stratton: Synthesize Neural Principles
Feb 20, 2024
The podcast discusses synthesizing neural principles for better AI, focusing on a 'sideways-in' approach for computational brains. It explores integrating diverse brain operations, the challenges in achieving general-purpose AI, advancements in robotics inspired by biological principles, and the complexities of spiking neural networks for artificial general intelligence.
Synthesizing different brain operations can lead to better AI models by integrating principles like sparse spike time coding and self-organization.
A sideways-in approach to modeling brains and AI emphasizes simulating emergent properties and leveraging principles of neural computation.
Studying simpler organisms and their brain-body-environment interactions can offer insights into developing robots with competencies of biological organisms.
Deep dives
The Importance of Neural Principles in Building AI
This podcast episode explores the significance of understanding biological neural computation in contrast to standard artificial neural networks. The guest emphasizes the need to combine and synthesize various principles of computation used by the brain, including sparse spike-time coding, self-organization, short-term plasticity, reward learning, homeostasis, feedback predictive circuits, conduction delays, oscillations, innate dynamics, stochastic sampling, multi-scale inhibition, k-winners-take-all, and embodied coupling. The guest argues that integrating these principles can lead to better AI. The episode also highlights the complex challenges in synthesizing and combining these principles into coherent functioning systems, and the importance of emergent properties in achieving better AI models.
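To make one of these principles concrete, k-winners-take-all is a competition rule in which only the k most active units in a population stay active and the rest are silenced, as if by lateral inhibition. The following is a toy sketch of the idea (not code from the episode or the guest's work):

```python
import numpy as np

def k_winners_take_all(activations, k):
    """Keep the k largest activations; zero out all others.

    A toy sketch of the k-WTA competition principle: lateral
    inhibition lets only the most active units remain active.
    """
    out = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]  # indices of the k largest values
    out[winners] = activations[winners]
    return out

a = np.array([0.1, 0.9, 0.3, 0.7, 0.2])
print(k_winners_take_all(a, 2))  # only 0.9 and 0.7 survive
```

In a network this kind of sparsification is one way to get the sparse, competitive codes the episode describes.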
The Sideways-In Approach to Modeling Brain and AI
The podcast discusses the concept of a sideways-in approach to modeling brains and AI. This approach focuses on incorporating principles of neural computation and simulating emergent properties, rather than trying to replicate the exact biological processes bottom-up. By abstracting functional goals and leveraging principles such as homeostasis, oscillations, and spiking networks, AI models can find happy operating points and exhibit robust behavior. This approach allows researchers to understand and leverage the crucial interplay between different neural principles in building more effective AI systems.
The Embodied Turing Test and Understanding Intelligence
The podcast explores the embodied Turing test in the context of understanding intelligence. By studying simpler organisms and how they survive in their environments, researchers can identify fundamental principles of neural computation. The goal is to build robots that can replicate the competencies of biological organisms, where the brain, body, and environment interact in closed loops. This approach recognizes the importance of the brain-body-environment interaction and emphasizes the need for a deeper understanding of how the brain's emergent properties arise from this interaction.
Challenges and Limitations in Understanding the Brain and AI
The podcast acknowledges the challenges and limitations in understanding the brain and developing AI models. While principles of neural computation can inform the design of AI systems, capturing the complexity of the brain remains an ongoing task. The podcast emphasizes the need for continuous exploration, experimentation, and synthesis of principles to develop more accurate models. It also highlights the impact of rapidly changing environments, technological advancements, and societal factors on mental health and psychological well-being, underscoring the importance of studying the brain in both its biological and environmental context.
Building spiking neural networks and their potential for AGI
The podcast episode discusses the potential of spiking neural networks in achieving artificial general intelligence (AGI). The guest emphasizes that current deep learning models, while successful in specific tasks, are limited in their dynamics and interaction with the environment. Spiking neural networks, on the other hand, exhibit transient and dynamic behavior, making them more suitable for real-world applications. The guest highlights the challenge of scaling up spiking networks and the need for further research to understand their principles and capabilities. The ultimate goal is to develop spiking networks that can rival the performance of gradient-descent models on complex problems, leading to more capable and useful AI systems.
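To make "transient and dynamic behavior" concrete, here is a minimal leaky integrate-and-fire neuron, a standard textbook spiking model (not the guest's own code). Unlike a stateless ANN unit, its output depends on input timing: membrane potential integrates input, leaks toward rest between inputs, and emits a discrete spike only when it crosses threshold.

```python
def lif_neuron(inputs, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron (Euler integration).

    v is internal state: it decays toward rest with time constant
    tau and fires a spike only when it crosses v_thresh.
    """
    v = 0.0
    spikes = []
    for i in inputs:
        v += dt * (-v / tau + i)   # leak toward rest, integrate input
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(1)
            v = v_reset            # reset after spiking
        else:
            spikes.append(0)
    return spikes

# The same total input produces a spike when tightly packed in time,
# but leaks away and never reaches threshold when spread out.
burst = [0.4, 0.4, 0.4] + [0.0] * 7
spread = [0.4, 0.0, 0.0, 0.0, 0.4, 0.0, 0.0, 0.0, 0.4, 0.0]
print(sum(lif_neuron(burst)), sum(lif_neuron(spread)))  # -> 1 0
```

The timing sensitivity shown here is exactly what a static feedforward unit lacks, and it is one reason spiking networks are harder to train but richer in dynamics.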
The nature of consciousness and its potential in machine systems
The episode also touches on the topic of consciousness in machine systems. The guest speaker suggests that consciousness is not unique to the human brain and that it can potentially emerge in machine systems that compute similar to the brain, even if built with different materials like silicon. The speaker argues that consciousness is a product of physical processes and does not require any magical or mystical component. While acknowledging that the exact point at which a machine system becomes conscious is uncertain, the speaker expresses the belief that as AI progresses towards more capable models, consciousness is likely to emerge in these systems.
Support the show to get full episodes, full archive, and join the Discord community.
Peter Stratton is a research scientist at Queensland University of Technology.
I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular - for example, he works with spiking neural networks, like my last guest, Dan Goodman.
What Pete argues for is what he calls a sideways-in approach. So a bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower-level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation brains employ that current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level", but it's between that and the "implementation level", I'd say. Because Pete is focused on the synthesis of different kinds of brain operations - how they intermingle to perform computations and produce emergent properties. So he thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. So we discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right?
0:00 - Intro
3:50 - AI background, neuroscience principles
8:00 - Overall view of modern AI
14:14 - Moravec's paradox and robotics
20:50 - Understanding movement to understand cognition
30:01 - How close are we to understanding brains/minds?
32:17 - Pete's goal
34:43 - Principles from neuroscience to build AI
42:39 - Levels of abstraction and implementation
49:57 - Mental disorders and robustness
55:58 - Function vs. implementation
1:04:04 - Spiking networks
1:07:57 - The roadmap
1:19:10 - AGI
1:23:48 - The terms AGI and AI
1:26:12 - Consciousness