
RSam Podcast

Cognitive Computing vs. LLMs (w/ Mahault Albarracin)

Apr 18, 2025
Mahault Albarracin, a PhD student at UQAM and Director of Research Strategy at VERSES, dives into the world of cognitive computing and AI. She lays out the critical distinctions between large language models and true cognitive agents, stressing the latter's need for self-awareness. The conversation also traces her interdisciplinary journey from sexology to cognitive science, highlighting the philosophical implications of AI's development. Additionally, Mahault touches on engineering challenges, sustainability in AI, and the importance of a sociological perspective on sentience.
01:21:11

Podcast summary created with Snipd AI

Quick takeaways

  • Large Language Models (LLMs) are limited to statistical pattern matching and lack true understanding or reasoning capabilities inherent in cognitive computing.
  • Active inference offers a framework for linking perception and action, emphasizing how agents dynamically update their beliefs and choose actions as they navigate and adapt to their environments (a minimal illustrative sketch follows this list).
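
The episode discusses active inference only at a conceptual level. As a purely illustrative sketch (not from the episode), the toy agent below maintains a belief over two hidden states, updates that belief from an observation (perception), and then selects the action with the lowest expected free energy (risk plus ambiguity). The matrices A, B, and C and the two-state setup are invented for the example.

```python
import numpy as np

# Hypothetical generative model: 2 hidden states, 2 observations, 2 actions.
# A[o, s]     -- likelihood P(o | s)
# B[a][s', s] -- transition P(s' | s, action a)
# C[o]        -- log-preferences over observations (what the agent "wants" to see)
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
B = [np.array([[1.0, 1.0],      # action 0: drives the world toward state 0
               [0.0, 0.0]]),
     np.array([[0.0, 0.0],      # action 1: drives the world toward state 1
               [1.0, 1.0]])]
C = np.log(np.array([0.9, 0.1]))  # the agent prefers observation 0

def update_beliefs(prior, obs):
    """Perception: exact Bayesian belief update over hidden states."""
    posterior = A[obs] * prior
    return posterior / posterior.sum()

def expected_free_energy(q_s, action):
    """Score an action by risk (divergence from preferences) plus ambiguity."""
    q_s_next = B[action] @ q_s          # predicted state distribution after acting
    q_o = A @ q_s_next                  # predicted observation distribution
    risk = np.sum(q_o * (np.log(q_o + 1e-16) - C))
    ambiguity = -np.sum(q_s_next * np.sum(A * np.log(A + 1e-16), axis=0))
    return risk + ambiguity

belief = np.array([0.5, 0.5])            # flat prior over hidden states
belief = update_beliefs(belief, obs=1)   # perception: incorporate an observation
G = [expected_free_energy(belief, a) for a in range(2)]
action = int(np.argmin(G))               # action selection: minimize expected free energy
print(belief, G, action)
```

The point of the sketch is the loop itself: perception updates beliefs about hidden states, and action is chosen to keep future observations close to the agent's preferences while reducing ambiguity, which is the perception-action coupling the takeaway refers to.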

Deep dives

Limitations of LLMs in Understanding the World

Large Language Models (LLMs) are primarily statistical pattern matchers and fundamentally lack an intrinsic model of the world. Unlike cognitive computing systems, which aim to build agents that learn and adapt via a model of themselves and their environment, LLMs generate responses based on probabilities derived from their training data. They do not engage in active reasoning or act to minimize uncertainty; their strength lies in producing coherent text, not in genuine understanding. As researchers increasingly recognize these limitations, attention is shifting toward forms of artificial intelligence that integrate higher cognitive functions.
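
As a deliberately crude, hypothetical illustration of "statistical pattern matching" (not an actual LLM, and not from the episode), the toy bigram model below does nothing but replay conditional token frequencies from its training text. Real LLMs are vastly more sophisticated, but the sketch shows why next-token prediction alone involves no model of self or world.

```python
import numpy as np

# Toy "training corpus" and a bigram table of which token follows which.
corpus = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[idx[prev], idx[nxt]] += 1

def next_token(word, rng=np.random.default_rng(0)):
    """Sample the next token from the conditional distribution seen in training."""
    p = counts[idx[word]]
    p = p / p.sum()
    return vocab[rng.choice(len(vocab), p=p)]

print(next_token("the"))  # e.g. "cat" -- a plausible continuation, with no understanding behind it
```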
