LessWrong (30+ Karma)

“Insofar As I Think LLMs ‘Don’t Really Understand Things’, What Do I Mean By That?” by johnswentworth

Nov 9, 2025
In this episode, John S. Wentworth, a software engineer and rationality writer, unpacks what he means when he says large language models (LLMs) "don't really understand things." He uses a 'bag of map-pieces' analogy to describe how LLMs can lack a coherent, assembled picture of the world, discusses their trouble maintaining global consistency in reasoning, and draws a comparison to aphantasia: capability without a unified mental image. He also speculates on whether larger models might achieve better global consistency, and asks how they could notice and correct their own errors the way humans do.
AI Snips
ADVICE

Trust The Phenomenology As A Probe

  • Notice the subjective sense that LLMs are missing something, even when your model of why is uncertain.
  • Use that phenomenological impression as a guide when probing and evaluating model behavior.
INSIGHT

Fragmented Map Mentality

  • LLMs resemble a bag of disconnected map-pieces rather than a single assembled map.
  • They can chain a few pieces together, but fail on questions that require joining many pieces at once (a sketch of such a probe follows below).
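One way to make the map-piece idea concrete is a chaining probe: feed the model a set of synthetic facts (the "pieces") and see how many it can join before it breaks down. The sketch below is purely illustrative and assumes you supply your own model-calling function; the names `ask`, `chain_probe`, and `my_llm_call` are hypothetical, not anything from the episode or a specific API.

  from typing import Callable

  def chain_probe(ask: Callable[[str], str], depth: int) -> bool:
      """Build `depth` synthetic facts ("map-pieces") and check whether the
      model joins them all correctly when answering a single question."""
      facts = [f"Town A{i + 1} is directly north of town A{i}." for i in range(depth)]
      question = f"Is town A{depth} north or south of town A0? Answer with one word."
      answer = ask(" ".join(facts) + " " + question)
      return "north" in answer.lower()

  # Usage, with whatever model client you have:
  #   for depth in (2, 4, 8, 16):
  #       print(depth, chain_probe(my_llm_call, depth))

If the episode's framing is right, short chains (two or three pieces) should usually succeed while long chains that require assembling many pieces at once start to fail.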
INSIGHT

Local Consistency, Global Incoherence

  • LLMs build locally consistent domains but often fail at global consistency across domains.
  • This explains errors where a symbol or assumption quietly changes meaning partway through a proof (see the consistency probe sketched below).
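In the same spirit, here is a rough sketch of a cross-domain consistency probe: pose the same yes/no constraint in two different framings (say, everyday language and formal notation) and check whether the answers agree. Again, `ask` and `my_llm_call` are placeholder names for whatever model client you use, and this is one possible probe under that assumption, not a method from the episode.

  from typing import Callable

  def consistency_probe(ask: Callable[[str], str], framings: list[str]) -> bool:
      """Ask logically equivalent yes/no questions phrased for different
      domains and report whether the model's answers all agree."""
      def normalize(raw: str) -> str:
          a = raw.strip().lower()
          if a.startswith("yes"):
              return "yes"
          if a.startswith("no"):
              return "no"
          return a
      answers = {normalize(ask(f + " Answer yes or no.")) for f in framings}
      return len(answers) == 1  # True only if every framing got the same answer

  # Usage: the same constraint stated informally and formally.
  #   consistency_probe(my_llm_call, [
  #       "Every member of a club is taller than 180 cm. Could its shortest member be 170 cm tall?",
  #       "Let S be a nonempty set of heights with every x in S greater than 180. Could min(S) equal 170?",
  #   ])

Disagreement between framings is exactly the locally-consistent, globally-incoherent pattern the snip describes: each answer can look sensible on its own while the two are incompatible.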