The Joy of Why

Will AI Ever Understand Language Like Humans?

May 1, 2025
In this engaging discussion, guest Ellie Pavlick, a computer scientist and linguist at Brown University, explores large language models (LLMs) and the gap between LLM language processing and human cognition, asking what it means to 'understand' language. Pavlick highlights how LLMs learn differently from humans and the implications for creativity and knowledge. The conversation examines the intersection of AI, art, and the philosophical questions that arise in this rapidly evolving field.
INSIGHT

Black Box Nature of LLMs

  • Large language models are black boxes: their behavior is learned from patterns in data, not written out explicitly in code.
  • We know the recipe for building them, but not the exact reasons behind any specific output (see the sketch below).
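To make the black-box point concrete, here is a minimal sketch (not from the episode, and deliberately toy-scale): the "recipe" for a language model is a known architecture plus a known training loop, but after training, the model's behavior lives in learned numerical weights rather than in rules anyone wrote down.

```python
# Hypothetical toy example: known recipe, opaque result.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32

# The architecture is fully specified code -- the part we understand.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# Training on data patterns: predict each next token from the previous one.
tokens = torch.randint(0, vocab_size, (1000,))  # stand-in for a corpus
for step in range(100):
    inputs, targets = tokens[:-1], tokens[1:]
    logits = model(inputs)
    loss = loss_fn(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained behavior is encoded in thousands of learned numbers. We can
# inspect every one of them, yet they do not explain *why* the model
# produced any particular output -- this is the black-box part.
n_params = sum(p.numel() for p in model.parameters())
print(f"learned parameters: {n_params}")
```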
INSIGHT

AI and Human Understanding Differ

  • LLMs do not truly "understand" language in the way humans do.
  • The notions of knowing and understanding are vague; they must be made precise before we can interpret AI capabilities.
INSIGHT

Computational Basis of Intelligence

  • Intelligence is likely computational, so being a digital computer does not rule out thinking.
  • The argument that computers cannot think simply because they are digital is philosophically weak.