Cognitive Computing vs. LLMs (w/ Mahault Albarracin)
Apr 18, 2025
Mahault Albarracin, a PhD student at UQAM and Director of Research Strategy at VERSUS, dives into the world of cognitive computing and AI. She reveals the critical distinctions between large language models and true cognitive agents, stressing the latter's need for self-awareness. The conversation also traverses her interdisciplinary journey from sexology to cognitive science, highlighting the philosophical implications of AI's development. Additionally, Mahault touches on engineering challenges, sustainability in AI, and the importance of a sociological perspective on sentience.
Large Language Models (LLMs) are limited to statistical pattern matching and lack true understanding or reasoning capabilities inherent in cognitive computing.
Active inference offers a framework for linking perception and action, emphasizing the dynamic nature of how agents navigate and adapt to their environments.
The interdisciplinary approach of combining cognitive computing with social sciences enriches our understanding of human interactions, ethical AI, and knowledge generation.
Deep dives
Limitations of LLMs in Understanding the World
Large Language Models (LLMs) are primarily statistical pattern matchers and fundamentally lack an intrinsic model of the world. Unlike cognitive computing systems, which aim to build agents that learn and adapt via a model of themselves and their environment, LLMs generate responses based on probabilities derived from their training data. As a result, they do not actively reason or work to reduce their own uncertainty; their utility lies in producing coherent text rather than genuine understanding. As researchers increasingly recognize these limitations, attention is shifting toward more sophisticated forms of artificial intelligence that integrate higher cognitive functions.
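To make the "probabilities derived from training data" point concrete, here is a minimal sketch of the next-token sampling step at the core of LLM text generation. The vocabulary, logit values, and function name are hypothetical illustrations, not any real model's API; a production model would produce logits over tens of thousands of tokens from learned weights.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Convert raw logits into a probability distribution (softmax)
    and sample one token index from it, as an LLM decoder does."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw an index in proportion to its probability.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i, probs
    return len(probs) - 1, probs

# Toy vocabulary and made-up logits for a prompt like "the sky is".
vocab = ["blue", "green", "falling", "sky"]
logits = [3.0, 1.0, 0.5, -1.0]
idx, probs = sample_next_token(logits)
print(vocab[idx], [round(p, 3) for p in probs])
```

The key observation, matching the episode's point, is that nothing in this loop consults a model of the world: the output is selected purely from a learned probability distribution over tokens.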
Active Inference: A New Paradigm of Cognition
Active inference is presented as a framework for understanding sentient behavior and cognition, focusing on the idea that perception and action are interlinked processes. This concept suggests that perception is not simply passive reception but actively involves hypothesis testing about the state of the world. Active inference promotes the notion that agents engage in planning as part of how they navigate their environments, adapting their beliefs based on new information. The research into active inference asserts its applicability across various domains, from intelligent robotics to social systems, making it an exciting area of cognitive computing.
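For readers unfamiliar with the formalism, active inference is standardly grounded in the free energy principle: an agent maintains beliefs $q(s)$ about hidden states $s$ of the world and minimizes variational free energy $F$, an upper bound on surprise about observations $o$:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
```

Perception corresponds to updating $q(s)$ to reduce $F$ (the hypothesis testing described above), while action changes which observations $o$ the agent receives; planning is typically cast as minimizing expected free energy over candidate policies. This is the textbook formulation rather than anything specific to this episode.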
Interdisciplinary Research at the Intersection of AI and Social Sciences
The dialogue in the episode illustrates the ongoing exploration of combining cognitive computing with social science frameworks, aiming to enrich the understanding of human interactions and decision-making. Researchers such as Mahault Albarracin integrate disciplines like sociology, phenomenology, and physics into their study of cognitive systems to uncover deeper insights into how meaning is constructed and interpreted. The emphasis on epistemic communities underlines the collaborative nature of knowledge generation and the influence of cultural contexts on individual beliefs and decisions. This interdisciplinary approach holds great potential for advancing AI alignment and responsible technology.
Philosophy of Mind and its Role in Cognitive Science
The discussion delves into the relationship between philosophy of mind and cognitive science, challenging the dichotomy between the two fields. It posits that while cognitive science relies on empirical research, philosophy can complement it by probing more profound questions about consciousness and existence. The integration of phenomenological insights with cognitive frameworks, such as active inference, allows for a richer understanding of subjective experiences while still grounding them in scientific inquiry. By merging these domains, researchers can better address ethical concerns in AI and understand the implications of agentic systems.
Addressing AI Alignment Through Understanding Meaning
AI alignment poses a significant challenge as researchers seek to develop systems that resonate with human values and ethics. The exploration of how agents embody beliefs and meanings provides a pathway to addressing the often complex relationship between humans and AI. By embedding ethical considerations in the design of cognitive agents, it becomes possible to foster systems that align more closely with societal goals and promote cooperation rather than conflict. This proactive approach manifests in active inference, which encourages agents to integrate environmental feedback and user preferences into their decision-making processes.
Mahault Albarracin is a PhD student at UQAM, Montréal, Québec, researching cognitive computing and social sciences. Mahault is Director of Research Strategy and Product Integration at VERSUS, a cognitive computing company. She is also an IEEE fellow, author, lecturer and has an MA in sexology. In this episode, we discuss the fundamental difference between LLMs and agents built on cognitive computing principles, AI agency, active inference AI, free energy principle, computational phenomenology and neo-materialism.
Copyright Disclaimer under section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, education and research. Fair use is a use permitted by copyright statutes that might otherwise be infringing. If you are or represent the copyright owner of materials used in this video and have a problem with the use of the related material, please email me at trahulsam@gmail.com, and we can sort it out.
Thank you.