The Illusion of Thinking: What the Apple AI Paper Says About LLM Reasoning

Deep Papers

Unraveling the Illusion of Reasoning in Models

This chapter explores large reasoning models (LRMs) and how they differ from large language models (LLMs), asking whether the reasoning process they exhibit is genuine or an illusion. It argues for a deeper understanding of how these models work, including the intricacies of pre-training and fine-tuning, illustrated with practical examples.
