
Quillette Podcast How to Think Like a Human
Jan 2, 2026
David Weitzner, an Associate Professor of Management at York University and founder of the Artful Intelligence Institute, explores the distinctions between human cognition and AI. He critiques the overhyped portrayals of AI shaped by sci-fi, arguing that large language models are probability calculators, not true intelligence. Weitzner urges a balanced understanding of AI's capabilities, emphasizing the pitfalls of deceptive product labeling and unfounded claims of machine consciousness. He navigates the ongoing debate about AI's practical skills, pushing for a rational perspective amid the hype.
Hype Exploits Sci‑Fi Expectations
- Big tech exploits sci‑fi priming to sell exaggerated AI narratives.
- Large language models are powerful probability calculators but not genuinely intelligent.
Treat LLMs As Tools, Not Oracles
- Do treat current LLMs as tools, not sentient replacements for human judgement.
- Demand realistic pricing and question claims before paying for AI services.
Architecture Limits True Intelligence
- The current architecture of large language models lacks the kind of competence that would make machines genuinely intelligent.
- Future breakthroughs might require different architectures like neurosymbolic approaches.
