Subbarao Kambhampati - Do o1 models search?

Machine Learning Street Talk (MLST)

Limitations of Language Models

This chapter examines the limitations of autoregressive large language models (LLMs) in retrieval and reasoning, arguing that they behave more like n-gram models than reliable databases. The discussion covers the 'chain of thought' methodology and how it is used to assess LLM performance on computational tasks, as well as the strong influence of pre-training data on outcomes. The conversation underscores how intricate reasoning in AI really is, and why these model limitations demand deeper investigation.

