The podcast episode discusses the free energy principle, which frames neuroscience and variational free energy minimization as an ongoing process. It then introduces the action principle as a broader concept that applies to information processing in biological and other self-organizing systems. The action principle is presented as a new branch of Lagrangian mechanics, one with a solid footing that could affect fields such as AI, engineering, and science.
The principle of least action is introduced as a framework in physics for explaining the trajectories of systems: the path a system actually takes is the one that minimizes (more precisely, makes stationary) the action integral, the time integral of the difference between kinetic and potential energy. The principle is thus an optimization process, and versions of this optimality resonate across scientific fields. Karl Friston's free-energy principle is presented as a mirror of it in cognitive science: biological systems minimize free energy to regulate and predict their exchanges with the environment.
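The action integral described above can be written compactly (a standard textbook formulation, not taken from the episode): for a path q(t) with kinetic energy T and potential energy V,

```latex
% Action as the time integral of the Lagrangian L = T - V
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad L = T - V

% The physical trajectory makes S stationary (\delta S = 0),
% which yields the Euler--Lagrange equation:
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```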
The podcast explores the concept of variational free energy in the context of the free-energy principle and how it relates to the brain's processing of sensory information. Variational free energy quantifies the mismatch between the brain's predictions and actual sensory input — the prediction error. Minimizing prediction errors means tweaking internal models to better match sensory information, like a sculptor chiseling away at marble to reveal a clearer figure. Variational methods, including variational inference, provide approximate solutions to otherwise intractable inference problems, allowing structured inferences about the world.
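As a minimal sketch of this idea (the one-dimensional Gaussian model and all names here are illustrative assumptions, not from the episode), the following toy example minimizes a variational free energy: a Gaussian belief q(z) = N(m, s²) is nudged by gradient steps until it matches the exact posterior under a prior N(0, 1) and likelihood N(x | z, 1).

```python
import math

def free_energy(m, s2, x):
    """Variational free energy F = KL(q || prior) - E_q[log p(x|z)]
    for prior N(0, 1), likelihood N(x|z, 1), and belief q(z) = N(m, s2)."""
    kl = 0.5 * (s2 + m**2 - 1.0 - math.log(s2))
    expected_loglik = -0.5 * math.log(2 * math.pi) - 0.5 * ((x - m) ** 2 + s2)
    return kl - expected_loglik

x = 2.0            # observed datum ("sensory input")
m, s2 = 0.0, 1.0   # initial belief: the prior itself
for _ in range(100):
    # closed-form gradients of F for this conjugate model
    m  -= 0.1 * (2 * m - x)                      # dF/dm  = (m - 0) + (m - x)
    s2 -= 0.1 * (0.5 * (1 - 1 / s2) + 0.5)        # dF/ds2 = 0.5*(1 - 1/s2) + 0.5

# Minimizing F recovers the exact posterior N(x/2, 1/2): m -> 1.0, s2 -> 0.5
```

For this conjugate model the posterior is available exactly, so the gradient descent is redundant; the point is that the same "reduce F" recipe still applies when no closed form exists.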
The podcast discusses the importance of message passing and autonomy in active inference. Message passing enables efficient inference in complex models by distributing computations and belief updates across the model. Autonomy is crucial for active inference agents to adapt and respond to dynamic, uncertain environments: the ability to interrupt ongoing processes and adjust the message-passing schedule in real time lets agents handle interruptions, respond to new information, and stay flexible. These ideas align with the situated-cognition hypothesis and support a reactive style of programming and information processing.
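As a toy illustration of message passing (a hypothetical two-variable chain, not a model from the episode), the sum-product rule computes a marginal by sending one local message instead of enumerating the joint distribution:

```python
import numpy as np

# A two-variable chain: prior over z1, transition P(z2 | z1), evidence on z2.
prior = np.array([0.5, 0.5])            # P(z1)
trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])           # P(z2 | z1), rows indexed by z1
evidence = np.array([0.3, 0.7])          # likelihood of the observation given z2

# Backward message from the transition factor to z1:
#   m(z1) = sum_z2 P(z2 | z1) * evidence(z2)
msg = trans @ evidence

# The posterior marginal of z1 combines its local prior with the incoming message.
posterior = prior * msg
posterior /= posterior.sum()
```

On a long chain the same local rule repeats node by node, which is what makes the computation distributable — and, as discussed next, interruptible.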
Active inference is a computational approach that allows for distributed and interruptible processing. It involves performing free energy minimization in a distributed manner, where computations are diffused and can be cut short or interrupted at any point. This enables robustness and adaptability in situations where computational resources fluctuate or when there are changes in the environment. Active inference agents focus on the most important information and make predictions based on that, which leads to efficient processing and effective decision-making.
The diffusion of processing and progressive computation are key aspects of active inference. Information and processing are diffused across the system, where each node constantly makes predictions and receives sensory inputs, which are then processed and updated. By diffusing processing in this way, active inference agents can adapt to changing conditions and optimize their actions. The system is robust and resilient because it doesn't rely on a specific set of connections or a fixed set of computations. Instead, it dynamically adjusts to the available resources and can be interrupted or scaled down as needed.
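The interruptible, progressive character described above can be sketched as an "anytime" computation (a hypothetical function over a toy Gaussian model — prior N(0, 1), likelihood N(x | z, 1) — not code from the episode): every intermediate result is a usable belief, and extra compute only refines it.

```python
def anytime_inference(x, budget):
    """Iterative belief update that can be cut short after any number of
    steps: each intermediate mean is a coherent (if cruder) estimate.
    Toy model: prior N(0, 1), likelihood N(x | z, 1)."""
    m = 0.0  # start from the prior mean
    for _ in range(budget):
        m -= 0.1 * (2 * m - x)  # one gradient step on the free energy
    return m

# With more compute the estimate approaches the exact posterior mean x/2 = 1.0,
# but interrupting early still leaves a valid, usable belief.
estimates = [anytime_inference(2.0, b) for b in (1, 5, 50)]
```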
Active inference can be seen as an automated engineering process. Agents built using active inference are purpose-driven and focus on solving real-world problems. Instead of programming explicit algorithms, active inference agents use a generative model and perform free energy minimization to make predictions and optimize actions. The goal is to create agents that are robust, adaptable, and capable of performing well in situations with limited computational resources or changing requirements. The potential applications of active inference include developing agents that can operate efficiently in varied environments and tasks, from self-driving cars to other real-world scenarios.
The podcast episode discusses the importance of building robust agents with nested generative models. The speaker highlights the need to hand-craft the entire stack, setting goals and biases, and creating an agent-based framework. They emphasize that the natural world's robustness stems from the nested nature of its agents and propose that artificial systems should adopt similar hierarchies. The episode also mentions an approach using factor graphs, with Gaussian processes inside nodes to add variance while maintaining the appearance of a regular node. The speaker envisions a toolbox for variational free energy minimization in probabilistic models, automating inference and democratizing active inference for a broad range of applications.
The podcast episode delves into the evolution of probabilistic models and the challenges of implementation in active inference. The speaker recounts their transition to adopting a Bayesian approach inspired by influential texts and pioneers in the field. They highlight the need to shift focus from specific algorithm design to more generic generative model design to address the limitations of deep learning and provide adaptability across different domains. The speaker emphasizes the desire to create a toolbox that automates inference, allowing engineers to focus on generative model design rather than deriving specific algorithms. They also contemplate the resistance and difficulties faced in attracting students and obtaining funding for active inference projects, while considering the potential impact of active inference in a future of embedded devices and distributed systems.
Watch behind the scenes with Bert on Patreon: https://www.patreon.com/posts/bert-de-vries-93230722
Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk
Note: there is some mild background music in chapters 1 (Least Action), 3 (Friston) and 5 (Variational Methods) - please skip ahead if it bothers you. It's a tiny fraction of the overall podcast.
YT version: https://youtu.be/2wnJ6E6rQsU
Bert de Vries is Professor in the Signal Processing Systems group at Eindhoven University of Technology. His research focuses on the development of intelligent autonomous agents that learn from in-situ interactions with their environment, drawing inspiration from diverse fields including computational neuroscience, Bayesian machine learning, active inference, and signal processing. Bert believes that the development of signal processing systems will in the future be largely automated by autonomously operating agents that learn purposefully from situated environmental interactions. He received his M.Sc. (1986) and Ph.D. (1991) degrees in Electrical Engineering from Eindhoven University of Technology (TU/e) and the University of Florida, respectively. From 1992 to 1999, he worked as a research scientist at Sarnoff Research Center in Princeton (NJ, USA). Since 1999, he has been employed in the hearing aids industry, in both engineering and managerial positions. De Vries was appointed part-time professor in the Signal Processing Systems Group at TU/e in 2012.

Contact:
https://twitter.com/bertdv0
https://www.tue.nl/en/research/researchers/bert-de-vries
https://www.verses.ai/about-us

Panel: Dr. Tim Scarfe / Dr. Keith Duggar

TOC:
[00:00:00] Principle of Least Action
[00:05:10] Patreon Teaser
[00:05:46] On Friston
[00:07:34] Capm Peterson (VERSES)
[00:08:20] Variational Methods
[00:16:13] Dan Mapes (VERSES)
[00:17:12] Engineering with Active Inference
[00:20:23] Jason Fox (VERSES)
[00:20:51] Riddhi Jain Pitliya
[00:21:49] Hearing Aids as Adaptive Agents
[00:33:38] Steven Swanson (VERSES)
[00:35:46] Main Interview Kick Off, Engineering and Active Inference
[00:43:35] Actor / Streaming / Message Passing
[00:56:21] Do Agents Lose Flexibility with Maturity?
[01:00:50] Language Compression
[01:04:37] Marginalisation to Abstraction
[01:12:45] Online Structural Learning
[01:18:40] Efficiency in Active Inference
[01:26:25] SEs become Neuroscientists
[01:35:11] Building an Automated Engineer
[01:38:58] Robustness and Design vs Grow
[01:42:38] RXInfer
[01:51:12] Resistance to Active Inference?
[01:57:39] Diffusion of Responsibility in a System
[02:10:33] Chauvinism in "Understanding"
[02:20:08] On Becoming a Bayesian

Refs:
RXInfer: https://biaslab.github.io/rxinfer-website/
Prof. Ariel Caticha: https://www.albany.edu/physics/faculty/ariel-caticha
Pattern Recognition and Machine Learning (Bishop): https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf
Data Analysis: A Bayesian Tutorial (Sivia): https://www.amazon.co.uk/Data-Analysis-Bayesian-Devinderjit-Sivia/dp/0198568320
Probability Theory: The Logic of Science (E. T. Jaynes): https://www.amazon.co.uk/Probability-Theory-Principles-Elementary-Applications/dp/0521592712/

#activeinference #artificialintelligence