Machine Learning Street Talk (MLST)

Prof. Subbarao Kambhampati - LLMs don't reason, they memorize (ICML2024 2/13)

Jul 29, 2024
In this engaging discussion, Subbarao Kambhampati, a Professor at Arizona State University specializing in AI, tackles the limitations of large language models. He argues that these models primarily memorize rather than reason, raising questions about their reliability. Kambhampati explores the need for hybrid approaches that combine LLMs with external verification systems to ensure accuracy. He also delves into the distinctions between human reasoning and LLM capabilities, emphasizing the importance of critical skepticism in AI research.
01:42:27

Podcast summary created with Snipd AI

Quick takeaways

  • Prof. Kambhampati highlights that LLMs excel in language generation but fundamentally lack genuine reasoning capabilities and logical verification.
  • The limitations of LLMs are evident in their reliance on memorization instead of independent reasoning, particularly in complex tasks like planning.

Deep dives

The Nature of Reasoning and Answering

When faced with a reasoning question, it can be difficult to distinguish answering from memory from genuine reasoning. A classic example is the question of why manhole covers are round: answering it from first principles requires realizing that a round cover cannot fall through its own hole, whereas other shapes, such as a square tipped on its diagonal, could. Today, however, many people answer from prior exposure to the question, which reflects preparation rather than reasoning ability. This distinction highlights the uniqueness of human reasoning compared to large language models (LLMs), which excel at language generation but lack true reasoning capabilities.
