AI Breakdown

arXiv preprint - Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models

May 8, 2024
Researchers Mosh Levy, Alon Jacoby, and Yoav Goldberg discuss how input length affects the reasoning performance of Large Language Models (LLMs), revealing that performance degrades as inputs grow longer, at lengths well below the models' maximum context windows. They also find that traditional perplexity metrics fail to predict this degradation, suggesting room for further research to enhance LLM reasoning over long inputs.