
Gary Marcus on the Massive Problems Facing AI & LLM Scaling | The Real Eisman Playbook, Episode 42
Jan 19, 2026 Gary Marcus, a cognitive scientist and AI researcher, dives into the challenges facing AI today, particularly large language models (LLMs). He argues that LLM scaling is hitting diminishing returns, discusses AI hallucinations, where models confidently generate false information, and highlights the risks involved. He advocates integrating symbolic components into AI systems to improve reliability and calls for a shift toward more diverse foundational research in the AI community.
AI Snips
LLMs Are Predictors, Not Thinkers
- Large language models (LLMs) are essentially advanced next-word predictors rather than true thinkers.
- Gary Marcus argues they excel at pattern recognition but lack the deliberative "System 2" reasoning humans use; the sketch below shows how literal the next-word framing is.
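The snip's core claim, that generation is nothing more than repeated next-word prediction, can be seen directly in code. A minimal sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint purely as illustrations (neither is named in the episode): a single forward pass produces only a probability distribution over the next token.

```python
# Minimal sketch of next-token prediction. Assumes the Hugging Face
# `transformers` library and the `gpt2` checkpoint as illustrative
# stand-ins; neither is mentioned in the episode.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # One forward pass yields logits for every position;
    # shape: (batch, seq_len, vocab_size).
    logits = model(**inputs).logits

# The model's entire output at the last position is a probability
# distribution over the next token, nothing more.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}  p={prob.item():.3f}")
```

Sampling a token from that distribution, appending it, and running the forward pass again is all that "generation" is; there is no separate deliberation step in the architecture.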
Harry Shearer Hallucination Example
- Gary Marcus recounts a hallucination in which ChatGPT incorrectly stated that Harry Shearer was British.
- The example shows that LLMs can confidently present false biographical details even when the facts are easily verifiable; a toy verification check follows.
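To make "easily verifiable" concrete, here is a toy check of the same claim against Wikipedia's public REST summary endpoint. The endpoint is real; the substring test is a deliberately crude illustration of my own, not a fact-checking method described in the episode.

```python
# Crude illustration: pull the Wikipedia summary for Harry Shearer and
# see whether the hallucinated nationality even appears in it. The REST
# endpoint is real; the substring check is a toy, not a fact checker.
import requests

def wikipedia_summary(title: str) -> str:
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, headers={"User-Agent": "fact-check-sketch/0.1"})
    resp.raise_for_status()
    return resp.json()["extract"]

summary = wikipedia_summary("Harry_Shearer")
print("Mentions 'British':", "British" in summary)    # expect False
print("Mentions 'American':", "American" in summary)  # expect True
```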
Novelty And Cutoff Dates Break LLMs
- LLMs have a training-data cutoff date and struggle with novel or post-cutoff events.
- Band-aids like web search are poorly integrated and don't reliably close real-time accuracy gaps; the retrieval pattern they rely on is sketched below.
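The "band-aid" the snip refers to is typically some form of retrieval augmentation: fresh text is fetched at query time and pasted into the prompt while the model's weights stay frozen at the cutoff. A minimal sketch of that pattern, with a hypothetical stub standing in for a real search backend (nothing here is from the episode):

```python
# Sketch of the retrieval-augmentation pattern: retrieved text is simply
# concatenated into the prompt. retrieve() is a hypothetical stub; a real
# system would call a search API here.
def retrieve(query: str) -> str:
    return "Stub snippet that a search backend would return for: " + query

def augmented_prompt(question: str) -> str:
    context = retrieve(question)
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

print(augmented_prompt("Who won yesterday's match?"))
```

Because the retrieved text is just more prompt, the model can still ignore or misread it, which is the poor integration Marcus points to.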