
Marketplace Tech
A case for AI models that understand, not just predict, the way the world works
Dec 15, 2025
Gary Marcus, a cognitive scientist and professor emeritus at NYU, distinguishes between large language models (LLMs) and "world models," arguing that the latter are key to achieving artificial general intelligence. He explains how world models benefit robotics and games through structured representations that enable better action and planning, and he discusses the resurgence of interest in these models as a way to push beyond the limitations of LLMs, advocating for systems that understand causality rather than merely predict outcomes.
LLMs Lack Structured World Representations
- Large language models predict word sequences statistically but lack structured internal representations of people, places, and events.
- Gary Marcus says this gap causes hallucinations, because LLMs don't store facts the way a database does (see the sketch below).
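
A minimal sketch of the contrast this snip draws, in Python with entirely hypothetical names (BigramPredictor, FactStore): a statistical predictor only learns which word tends to follow another, while a structured store holds explicit facts that are either present or absent.

```python
from collections import Counter, defaultdict

# Statistical predictor: learns which word tends to follow another, nothing more.
class BigramPredictor:
    def __init__(self, corpus: str):
        self.counts = defaultdict(Counter)
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, word: str) -> str:
        # Returns the statistically most likely next word; the model has no
        # notion of whether the continuation is factually true.
        followers = self.counts.get(word)
        return followers.most_common(1)[0][0] if followers else "<unk>"

# Structured store: facts are explicit records, so retrieval is exact.
class FactStore:
    def __init__(self):
        self.facts = {}

    def assert_fact(self, subject: str, relation: str, value: str) -> None:
        self.facts[(subject, relation)] = value

    def query(self, subject: str, relation: str):
        # Either the fact is stored or it is not; there is no plausible guess.
        return self.facts.get((subject, relation))

predictor = BigramPredictor("Paris is lovely . Paris is crowded . Paris is lovely")
print(predictor.predict("Paris"))          # "is" -- fluent, but content-free

store = FactStore()
store.assert_fact("Paris", "capital_of", "France")
print(store.query("Paris", "capital_of"))  # "France" -- exact or absent
```

The predictor will happily emit a fluent continuation whether or not it is true; the store can only return what was actually asserted.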
World Models Are Core In Robotics And Games
- Robotics and video games have long used explicit world models to represent entities, locations, and capabilities.
- Marcus argues robots need those models to reason about surfaces, strengths, and connections in the physical world (a minimal sketch follows this list).
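
A minimal sketch, with invented names (Entity, WorldModel, can_act), of the kind of explicit world model long used in robotics and games: entities with locations and capabilities, plus connections between locations, which a planner can query before acting.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    location: str
    capabilities: set = field(default_factory=set)

@dataclass
class WorldModel:
    entities: dict = field(default_factory=dict)
    # Which locations are directly connected (e.g., rooms a robot can move between).
    connections: dict = field(default_factory=dict)

    def add(self, entity: Entity) -> None:
        self.entities[entity.name] = entity

    def connect(self, a: str, b: str) -> None:
        self.connections.setdefault(a, set()).add(b)
        self.connections.setdefault(b, set()).add(a)

    def can_act(self, actor: str, action: str, target: str) -> bool:
        # An action is feasible only if the actor has the capability and is
        # in (or next to) the target's location -- a structural check rather
        # than a statistical guess.
        agent, obj = self.entities[actor], self.entities[target]
        co_located = agent.location == obj.location
        adjacent = obj.location in self.connections.get(agent.location, set())
        return action in agent.capabilities and (co_located or adjacent)

world = WorldModel()
world.add(Entity("robot", "kitchen", {"grasp", "move"}))
world.add(Entity("mug", "kitchen"))
world.connect("kitchen", "hallway")
print(world.can_act("robot", "grasp", "mug"))  # True: co-located and capable
```

Because entities, locations, and capabilities are represented explicitly, the feasibility check either succeeds or fails for a stated reason, which is what makes such models useful for planning.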
Scaling Alone Has Hit Diminishing Returns
- Scaling up data and compute for LLMs has yielded diminishing returns, with smaller incremental gains since 2023.
- That slowdown has renewed interest in alternative approaches such as video-based or learned world models.