Eye On A.I.

#302 Karl Friston: How the Free Energy Principle Could Rewrite AI

Nov 19, 2025
Karl Friston, a leading neuroscientist and chief scientific officer at Verses, discusses his Free Energy Principle and how it could reshape AI: using active inference to build systems that learn the way brains do rather than merely predicting outcomes. He explains why representing uncertainty explicitly can significantly reduce errors in high-stakes situations. The conversation also covers Axiom, a new architecture that learns efficiently and adapts through interaction, with applications in logistics and robotics.
INSIGHT

Free Energy As A Universal Objective

  • Variational free energy is an information-theoretic objective that trades off expected accuracy against a complexity penalty (the divergence of posterior beliefs from prior beliefs).
  • Minimizing it yields efficient, non-overfitting models that mirror thermodynamic and Bayesian principles.
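The accuracy/complexity trade-off above can be sketched for a toy discrete model. This is a minimal illustration, not anything from the episode: the two-state prior, likelihood, and belief values below are all made up.

```python
import math

# Hypothetical discrete generative model with two hidden states.
prior = [0.5, 0.5]        # p(s): prior over hidden states
likelihood = [0.9, 0.2]   # p(o | s) for the observed outcome o
q = [0.8, 0.2]            # q(s): approximate posterior beliefs

# Variational free energy F = complexity - accuracy:
#   complexity = KL[q(s) || p(s)], the cost of moving beliefs away from the prior
#   accuracy   = E_q[log p(o | s)], how well beliefs explain the observation
complexity = sum(qi * math.log(qi / pi) for qi, pi in zip(q, prior))
accuracy = sum(qi * math.log(li) for qi, li in zip(q, likelihood))
F = complexity - accuracy

# F upper-bounds surprise, -log p(o); the exact Bayesian posterior attains the bound.
evidence = sum(pi * li for pi, li in zip(prior, likelihood))
posterior = [pi * li / evidence for pi, li in zip(prior, likelihood)]
```

Minimizing F over q drives beliefs toward the exact posterior, at which point F equals the surprise, -log p(o). The complexity term is the penalty that keeps the model from overfitting beliefs to any single observation.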
INSIGHT

Perception As Continuous Inference

  • The brain predicts sensory input and updates beliefs when prediction errors occur, doing rapid Bayesian belief updating.
  • This continuous, millisecond-scale inference explains perception as ongoing model revision, not just slow learning.
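This belief-updating loop can be sketched as gradient descent on prediction error, assuming a one-dimensional model with a known linear mapping from hidden cause to observation. All values here are illustrative.

```python
# Minimal sketch of continuous belief updating (assumed Gaussian-style model):
# the belief mu about a hidden cause predicts the observation via g(mu) = 2*mu,
# and perception is the rapid loop that nudges mu to cancel prediction error.
obs = 3.0        # sensory sample
mu = 0.0         # initial belief about the hidden cause
precision = 1.0  # confidence assigned to the sensory channel
lr = 0.05        # update rate (standing in for fast neuronal dynamics)

def g(mu):
    """Predicted observation given the current belief."""
    return 2.0 * mu

for _ in range(500):
    error = obs - g(mu)                  # prediction error
    mu += lr * precision * 2.0 * error   # chain rule: dg/dmu = 2

# mu converges to 1.5, where the prediction g(mu) matches the observation.
```

The same loop run at millisecond timescales, with error precisions themselves adjustable, is the sense in which perception is continuous model revision rather than slow learning.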
INSIGHT

Local Message Passing Beats Backpropagation

  • The brain uses local message passing (predictions down, errors up) rather than global backpropagation across layers.
  • Local predictive coding is biologically plausible and far more efficient than standard backpropagation for hierarchical models.
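The local scheme can be sketched with a toy two-level linear hierarchy (unit weights and precisions, values illustrative): predictions descend, errors ascend, and each belief updates using only the errors immediately adjacent to it, with no global backward pass through the stack.

```python
# Hypothetical two-level predictive-coding hierarchy:
# level 1 predicts the observation, level 2 predicts level 1,
# and a fixed prior constrains level 2 from above.
obs = 4.0
mu1, mu2 = 0.0, 0.0   # beliefs at levels 1 and 2
prior = 0.0           # top-level prior expectation
lr = 0.05

for _ in range(2000):
    e0 = obs - mu1    # sensory prediction error (bottom)
    e1 = mu1 - mu2    # error between levels 1 and 2
    e2 = mu2 - prior  # error between level 2 and the prior
    mu1 += lr * (e0 - e1)   # local: only the errors below and above level 1
    mu2 += lr * (e1 - e2)   # local: only the errors below and above level 2

# Beliefs settle between the prior (0) and the data (4): mu1 -> 8/3, mu2 -> 4/3.
```

Each update is gradient descent on the squared local errors, so every unit needs only messages from its immediate neighbours, which is the biological-plausibility argument against layer-spanning backpropagation.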