Subbarao Kambhampati - Do o1 models search?

Machine Learning Street Talk (MLST)

CHAPTER

Limitations of Language Models

This chapter explores the constraints of autoregressive large language models (LLMs) in retrieval and reasoning, arguing that they behave more like n-gram models than reliable databases. The discussion covers the 'chain of thought' prompting methodology and its effect on performance on computational tasks, and the role of pre-training data in shaping outcomes. The conversation underscores the intricacies of reasoning in AI, emphasizing the need for deeper understanding and the challenges posed by model limitations.
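The 'chain of thought' methodology mentioned above amounts to prompting the model to write out intermediate reasoning steps before giving its final answer, rather than answering directly. A minimal sketch of the two prompt styles (the function names and example question are illustrative, not from the episode):

```python
# Illustrative sketch only: no real LLM call is made here; we only show how
# a 'chain of thought' prompt differs from a direct-answer prompt.

def direct_prompt(question: str) -> str:
    """Ask for the answer alone."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """Ask the model to emit intermediate reasoning steps before the
    final answer (the 'chain of thought' prompting technique)."""
    return f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    q = "If a train travels 60 km in 1.5 hours, what is its average speed?"
    print(direct_prompt(q))
    print(chain_of_thought_prompt(q))
```

The episode's point is that the extra reasoning text changes the token distribution the model conditions on, which can help or hurt depending on what appeared in its pre-training data.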
