
Language Understanding and LLMs with Christopher Manning - #686
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Caution in attributing reasoning ability to large language models
Large language models can behave in ways that mimic reasoning because of their vast stored knowledge, but they often make glaring mistakes that suggest pattern matching rather than genuine reasoning. Current models also struggle with planning problems, revealing limits in reasoning under constraints. Caution is therefore advised in attributing reasoning ability to large language models. By contrast, neural networks combined with search procedures excel at planning and position evaluation in games such as chess and Go.
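As a rough illustration of the "neural net plus search" pattern contrasted with pure pattern matching, here is a minimal sketch (not from the episode): a learned evaluation function guiding a depth-limited game-tree search. The `value_fn` below is a hand-written stand-in for a trained value network, and tic-tac-toe is used only to keep the example self-contained.

```python
from typing import List, Optional, Tuple

Board = Tuple[Optional[int], ...]  # 9 cells: +1, -1, or None

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board: Board) -> Optional[int]:
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board: Board) -> List[int]:
    return [i for i, v in enumerate(board) if v is None]

def value_fn(board: Board, player: int) -> float:
    """Stand-in for a trained value network: scores lines still open for each side."""
    score = 0.0
    for a, b, c in LINES:
        cells = (board[a], board[b], board[c])
        if -player not in cells:
            score += 0.1 * cells.count(player)
        if player not in cells:
            score -= 0.1 * cells.count(-player)
    return score

def negamax(board: Board, player: int, depth: int) -> Tuple[float, Optional[int]]:
    """Depth-limited search; the learned evaluation scores non-terminal leaf positions."""
    w = winner(board)
    if w is not None:
        return (1.0 if w == player else -1.0), None
    moves = legal_moves(board)
    if not moves:
        return 0.0, None  # draw
    if depth == 0:
        return value_fn(board, player), None
    best_score, best_move = -float("inf"), None
    for m in moves:
        child = board[:m] + (player,) + board[m + 1:]
        score, _ = negamax(child, -player, depth - 1)
        score = -score  # opponent's best is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == "__main__":
    empty: Board = (None,) * 9
    score, move = negamax(empty, player=1, depth=4)
    print(f"search picks cell {move} with estimated value {score:+.2f}")
```

Systems like chess and Go engines follow this shape at far larger scale, pairing a learned evaluator with deep search, which is the combination the discussion credits with strong planning performance.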