Prof. Subbarao Kambhampati - LLMs don't reason, they memorize (ICML2024 2/13)

Machine Learning Street Talk (MLST)

00:00

Exploring the Limits of Reasoning in LLMs through Cipher Decoding

This chapter examines the shortcomings of large language models such as GPT-4, highlighting their tendency to memorize rather than reason. Using ciphertext decoding as an example, it underscores the need for critical skepticism and rigorous testing to accurately assess these models' true capabilities.
