
The Illusion of Thinking: What the Apple AI Paper Says About LLM Reasoning
Deep Papers
Unraveling the Illusion of Reasoning in Models
This chapter examines large reasoning models (LRMs) and how they differ from large language models (LLMs), asking whether the reasoning these models exhibit is genuine or an illusion. It argues for a deeper understanding of how they actually work, including the roles of pre-training and fine-tuning, illustrated with practical examples.