

The Great Chatbot Debate: Do LLMs Really Understand?
Apr 2, 2025
Emily Bender, a computational linguist at the University of Washington, argues that LLMs lack true understanding, stressing that meaning requires connecting language to the world rather than merely processing its form. In contrast, Sébastien Bubeck of OpenAI defends LLMs, pointing to their advances in problem-solving and reasoning. The two discuss the evolution of AI, the subjective nature of understanding, and skepticism surrounding Artificial General Intelligence. They also explore the impact of LLMs on human interaction and the entangled relationship between wealth, power, and technology.
Linguistics' Role
- Emily Bender emphasizes the importance of linguistics in understanding how language models work.
- Linguistics studies how language works and how humans use it, which is crucial for evaluating LLMs' understanding.
LLMs and Meaning
- Language models, even advanced ones, do not truly understand meaning.
- They manipulate the form of language without grasping the connection between words and the world.
T9's Limitations
- Older language models like T9 illustrate the limitations of statistical approaches to language.
- T9 prioritized the most frequent word for a given key sequence, leading to frustrating errors such as suggesting "good" when the user meant "home", since both words are typed with the same keypresses (4-6-6-3); see the sketch below.
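To make the frequency point concrete, here is a minimal sketch (not from the episode) of T9-style disambiguation: every word is reduced to the digits of a phone keypad, and when several words collide on the same sequence, a purely frequency-based model always suggests the most common one. The word frequencies below are invented for illustration.

```python
# Hypothetical sketch of T9-style predictive text. "home", "good", "gone",
# and "hood" all map to the key sequence 4663, so a frequency-only model
# will always suggest "good", even when the user meant "home".

# Standard phone keypad: digit -> letters
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

# Illustrative corpus frequencies (made up for this example)
WORD_FREQ = {"good": 120_000, "home": 45_000, "gone": 30_000, "hood": 5_000}

def key_sequence(word: str) -> str:
    """Map a word to the digits a user would press on a T9 keypad."""
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.lower())

def t9_suggest(digits: str) -> str | None:
    """Return the highest-frequency known word matching the key sequence."""
    candidates = [w for w in WORD_FREQ if key_sequence(w) == digits]
    return max(candidates, key=WORD_FREQ.get, default=None)

if __name__ == "__main__":
    print(key_sequence("home"))  # 4663
    print(t9_suggest("4663"))    # good -- frequency wins, regardless of intent
```

The sketch shows why a statistical shortcut that works on average can still fail systematically for a particular user or context, which is the limitation the snip attributes to older models like T9.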