Prof. Subbarao Kambhampati - LLMs don't reason, they memorize (ICML2024 2/13)

Machine Learning Street Talk (MLST)

CHAPTER

Exploring the Limits of Reasoning in LLMs through Cipher Decoding

This chapter examines the shortcomings of large language models such as GPT-4, highlighting their tendency to memorize rather than reason. Using ciphertext decoding as an example, it underscores the need for skepticism and rigorous testing when assessing these models' true capabilities.
