
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI
Lex Fridman Podcast
LLM Reasoning Beyond Constant Computation
An LLM performs only primitive reasoning because the computation it spends per token is constant, regardless of the complexity of the question or answer. Human reasoning, by contrast, allocates more time to harder problems; an LLM's total compute is fixed by the number of tokens it produces, so it cannot scale its effort to match a question's difficulty.
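To make the point concrete, here is an illustrative sketch (not from the episode) of the standard back-of-the-envelope estimate that a dense decoder-only transformer spends roughly 2 × (parameter count) FLOPs per generated token. The model size and token counts below are hypothetical; the key observation is that total compute depends only on how many tokens are produced, never on how hard the question is.

```python
def flops_per_generated_token(n_params: int) -> int:
    """Approximate forward-pass FLOPs per token: ~2 * parameter count."""
    return 2 * n_params

def total_generation_flops(n_params: int, n_output_tokens: int) -> int:
    """Total compute scales only with tokens produced, not with difficulty."""
    return n_output_tokens * flops_per_generated_token(n_params)

N = 7_000_000_000  # hypothetical 7B-parameter model

# An easy question and a hard one, answered in the same number of tokens,
# cost exactly the same compute -- the model cannot "think longer".
easy = total_generation_flops(N, n_output_tokens=20)  # e.g. "What is 2+2?"
hard = total_generation_flops(N, n_output_tokens=20)  # e.g. a subtle proof
assert easy == hard
```

Under this approximation, the only way an LLM spends more compute on a problem is by emitting more tokens, which is precisely the contrast with human reasoning drawn above.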