Marketplace All-in-One

A case for AI models that understand, not just predict, the way the world works

Dec 15, 2025
Gary Marcus, a cognitive scientist and author known for his critical views on deep learning, shares his insights on the importance of 'world models' in AI. He explains how these models offer systematic representations of reality, unlike large language models that predict text probabilistically. Marcus discusses the resurgence of interest in world models due to the limits of scaling LLMs, emphasizing their role in robotics and gaming. Ultimately, he argues that understanding the world is crucial for achieving artificial general intelligence.
INSIGHT

What A World Model Is

  • Gary Marcus defines a world model as an internal representation of people, places, objects and their possibilities.
  • He says humans use such models to predict others' thoughts and environmental affordances in real time.
INSIGHT

Why LLMs Hallucinate

  • Marcus contrasts LLMs' statistical text prediction with systems that store structured facts or scene information.
  • He links LLM hallucinations to their lack of systematic internal databases of entities and facts.
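A minimal sketch of the contrast Marcus draws, assuming toy data throughout (none of the names or facts below are from the episode): a bigram "language model" that continues text from co-occurrence statistics, versus a structured fact store that can answer "unknown" instead of guessing.

```python
from collections import Counter, defaultdict

# (1) Toy bigram "language model": the next word is whatever most often
#     followed the previous word in the training text -- a plausible
#     continuation, not a checked fact.
corpus = "paris is the capital of france . rome is the capital of italy .".split()
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    counts = followers.get(word)
    # With no statistics to draw on, the model still emits *something*
    # plausible-looking: the shape of a hallucination.
    return counts.most_common(1)[0][0] if counts else "the"

# (2) Toy fact store: explicit (entity, relation) -> value entries,
#     the "systematic internal database" Marcus says LLMs lack.
facts = {("france", "capital"): "paris", ("italy", "capital"): "rome"}

def look_up(entity: str, relation: str) -> str:
    # A structured store can refuse to answer rather than fabricate.
    return facts.get((entity, relation), "unknown")

print(predict_next("capital"))       # statistical guess: "of"
print(look_up("france", "capital"))  # grounded fact: "paris"
print(look_up("spain", "capital"))   # honest gap: "unknown"
```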
INSIGHT

Robots And Games Use Real World Models

  • Marcus notes robotics and video games already rely on explicit world models like scene graphs listing entities and relations (see the sketch below).
  • He finds LLMs unusual for lacking that front-and-center representation of objects and states.
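For concreteness, here is a minimal scene-graph sketch of the kind Marcus describes: an explicit list of entities plus typed relations between them. The classes, relation names, and scene contents are illustrative assumptions, not from the episode.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    kind: str                         # e.g. "agent", "object", "surface"
    state: dict = field(default_factory=dict)

@dataclass
class SceneGraph:
    entities: dict = field(default_factory=dict)   # name -> Entity
    relations: set = field(default_factory=set)    # (subject, relation, object)

    def add(self, entity: Entity) -> None:
        self.entities[entity.name] = entity

    def relate(self, subj: str, rel: str, obj: str) -> None:
        self.relations.add((subj, rel, obj))

    def query(self, rel: str, obj: str) -> list:
        # Enumerate every entity standing in `rel` to `obj`: the kind of
        # systematic, inspectable lookup an LLM has no front-and-center
        # equivalent of.
        return [s for (s, r, o) in self.relations if r == rel and o == obj]

scene = SceneGraph()
scene.add(Entity("robot", "agent"))
scene.add(Entity("table", "surface"))
scene.add(Entity("cup", "object", {"graspable": True}))
scene.relate("cup", "on", "table")
scene.relate("robot", "near", "table")

print(scene.query("on", "table"))    # ['cup'] -- explicit object/state tracking
```

Game engines and robot planners update a structure like this on every frame or control step, which is what keeps their world state explicit and queryable rather than implicit in learned weights.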