AXRP - the AI X-risk Research Podcast

32 - Understanding Agency with Jan Kulveit

May 30, 2024
Jan Kulveit, who leads the Alignment of Complex Systems research group, dives into the fascinating intersection of AI and human cognition. He discusses active inference, the differences between large language models and the human brain, and how feedback loops influence behavior. The conversation explores hierarchical agency, the complexities of aligning AI with human values, and the philosophical implications of self-awareness in AI. Kulveit also critiques existing frameworks for understanding agency, shedding light on the dynamics of collective behaviors.
02:22:29

Podcast summary created with Snipd AI

Quick takeaways

  • Jan Kulveit explains the concept of active inference, highlighting how it contrasts with traditional cognitive models in understanding human cognition.
  • The discussion explores a key limitation of large language models: they lack the feedback loops that active inference theory holds essential for learning and adaptation.

Deep dives

Active Inference and Large Language Models

The discussion centers on a paper comparing large language models (LLMs) to active inference, a framework originating in neuroscience. The authors propose that LLMs can be seen as a special case of active inference systems, in that both work by predicting sensory inputs. Where the two differ is the feedback loop: an active inference agent acts on the world and so shapes its own future inputs, a loop that is crucial for learning and adaptation and that LLMs lack. This raises questions about how operating without a tightly closed feedback loop shapes LLMs' responses and interactions.
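
To make the open-loop vs. closed-loop distinction concrete, here is a minimal Python sketch. It is not from the episode or the paper; the toy environment, the update rules, and all parameter choices are illustrative assumptions. It contrasts an open-loop predictor, which only consumes a stream of inputs, with a closed-loop agent whose actions change the observations it subsequently receives.

```python
import random

def environment(state, action):
    # Toy world: the agent's action nudges a hidden state, and the
    # observation is that state plus noise (purely illustrative).
    state += action
    observation = state + random.gauss(0, 0.1)
    return state, observation

def open_loop_predictor(history):
    # LLM-like setup: predict the next input from past inputs.
    # The prediction never feeds back into what is observed next.
    return history[-1] if history else 0.0

def closed_loop_agent(belief, observation, preferred=0.0, lr=0.5):
    # Simplified active-inference-flavoured step: update the belief
    # toward the observation (perception reduces prediction error),
    # then act to pull the world toward the preferred state.
    belief += lr * (observation - belief)
    action = lr * (preferred - belief)
    return belief, action

state, belief, action = 5.0, 0.0, 0.0
history = []
for _ in range(30):
    state, obs = environment(state, action)  # the action changes future observations
    history.append(obs)
    _ = open_loop_predictor(history)         # a prediction is made, but it never acts
    belief, action = closed_loop_agent(belief, obs)

print(f"hidden state pulled toward the preferred value: {state:.2f}")
```

The only point of the sketch is that in the closed loop the agent's prediction errors drive actions that reshape its own future inputs, whereas the open-loop predictor's outputs never touch the world; this is the loop the paper argues LLMs operate without.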
