
Does ChatGPT “Think”? A Cognitive Neuroscience Perspective with Anna Ivanova - #620
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
00:00
Exploring Limitations of Language Models
This chapter examines the limitations of current large language models (LLMs) in functional performance and advocates for a modular approach modeled on human cognitive processes. It highlights issues with reasoning, arithmetic, context retention, and temporal understanding, and suggests that more diverse training methods could enhance these capabilities.