Prof. Subbarao Kambhampati - LLMs don't reason, they memorize (ICML2024 2/13)

Machine Learning Street Talk (MLST)

The Limits of Language Models in Reasoning

This chapter explores the differences between formal and natural languages, emphasizing the flexibility of natural expression compared to the constraints of formal languages. It critiques the reasoning capabilities of large language models (LLMs), contrasting human reasoning based on first principles with LLMs' reliance on memorized responses. Through examples and discussions of transitive reasoning, the chapter highlights the inherent flaws in LLMs' ability to genuinely reason and addresses the misconceptions surrounding their capabilities.
