Machine Learning Street Talk (MLST)

Prof. Subbarao Kambhampati - LLMs don't reason, they memorize (ICML2024 2/13)

Jul 29, 2024
In this engaging discussion, Subbarao Kambhampati, a Professor at Arizona State University specializing in AI, tackles the limitations of large language models. He argues that these models primarily memorize rather than reason, raising questions about their reliability. Kambhampati explores the need for hybrid approaches that combine LLMs with external verification systems to ensure accuracy. He also delves into the distinctions between human reasoning and LLM capabilities, emphasizing the importance of critical skepticism in AI research.
INSIGHT

Reasoning vs. Memorization

  • Reasoned answers are difficult to distinguish from memorized ones.
  • Subbarao Kambhampati highlights this challenge: a correct answer may be retrieved rather than derived.
ANECDOTE

Manhole Cover Anecdote

  • Subbarao Kambhampati recounts the classic Microsoft interview question about why manhole covers are round.
  • Early candidates had to reason through it; today's candidates likely just recall the well-known answer.
INSIGHT

LLMs as N-gram Models

  • Large Language Models (LLMs) are essentially advanced n-gram models.
  • They predict the next word based on a vast sequence of preceding words.
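A minimal sketch of the n-gram idea this insight references: predict the next word from counts of the preceding context. The toy corpus and function names here are illustrative, not from the episode; real LLMs condition on vastly longer contexts with learned representations rather than raw counts, which is the point of the analogy.

```python
from collections import defaultdict, Counter

def train_ngram(tokens, n=2):
    # Count, for each (n-1)-word context, which words follow it.
    counts = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        counts[context][tokens[i + n - 1]] += 1
    return counts

def predict(counts, context):
    # Return the most frequent continuation seen after this context.
    return counts[tuple(context)].most_common(1)[0][0]

# Toy corpus (illustrative only)
corpus = "manhole covers are round because round covers cannot fall in".split()
model = train_ngram(corpus, n=2)
print(predict(model, ["manhole"]))  # the only word ever seen after "manhole"
```

The model never "reasons" about geometry; it only replays statistics of its training text, which is the memorization-versus-reasoning distinction drawn above.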