
The Gradient: Perspectives on AI
Subbarao Kambhampati: Planning, Reasoning, and Interpretability in the Age of LLMs
Feb 8, 2024
Subbarao Kambhampati, Professor of computer science at Arizona State University, discusses planning, reasoning, and interpretability in the age of LLMs. Topics include explanation in AI, thinking and language, scalability in planning, computational complexity in LLMs, and concerns about misinformation generated by LLMs.
01:59:03
Episode notes
Podcast summary created with Snipd AI
Quick takeaways
- The reasoning ability of large language models is better described as approximate retrieval than as robust thinking, underscoring these models' limitations in planning and reasoning tasks.
- Understanding the role of explanations and mental modeling in AI systems is crucial for making their reasoning more understandable and interactive.
Deep dives
Large language models and reasoning capabilities
Many claims have been made about the capabilities of large language models to reason and plan. However, studies have shown that the reasoning ability of these models is better described as approximate retrieval than as robust thinking. Subbarao Kambhampati's group has demonstrated similar results, highlighting the importance of understanding the limitations of language models in planning and reasoning tasks.