arXiv preprint - Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models

AI Breakdown

Exploring the Impact of Input Length on Large Language Models

Exploring the inadequacy of perplexity metrics for measuring large language models' reasoning abilities over long inputs, and identifying failure modes such as avoidance of answers, a preference for specific types of responses, and failure to make logical connections.
