The Limits of Language Models
Is there a theoretical limit to the size? I mean, is there a law of diminishing returns? I assume there would be. How large can the language models get? If you just continue to throw more and more at it, does it just get better and better, or does it eventually just top out?

I want the smallest possible language model that I can run on my own device that can still do the magic: it can summarize things, extract facts, generate bits of code, and all of that sort of stuff. The limitation right now is more the expense of running them. Like, if you made a GPT-5 that was ten times the size of GPT-4 and cost ten times as much to run, does that actually...
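The diminishing-returns question has a rough empirical answer in the scaling-law literature (not cited in the episode itself): Kaplan et al. (2020) fit language-model test loss to a power law in parameter count, so each extra order of magnitude of parameters buys a shrinking absolute improvement.

```latex
% Power-law fit from Kaplan et al. (2020), "Scaling Laws for Neural
% Language Models"; N is the non-embedding parameter count and the
% constants are their reported fits.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,\quad N_c \approx 8.8 \times 10^{13}
```

Under that fit, a model ten times larger only cuts loss by a factor of about 10^0.076 ≈ 1.19, which is the "tops out" flavour the question is gesturing at: returns never hit a hard wall, they just keep shrinking relative to the cost.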
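As for the "smallest model on my own device" wish, here is a minimal sketch of what that looks like today with Hugging Face transformers; the model name is an illustrative small open model, not something named in the episode.

```python
# Minimal sketch: local summarization with a small open model.
# The model below is an illustrative sub-1B-parameter choice that
# runs on a laptop CPU; swap in whatever small model you prefer.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",
)

article = "...pasted article text..."
prompt = f"Summarize the following in two sentences:\n\n{article}"

# Greedy decoding keeps the summary deterministic.
out = generator(prompt, max_new_tokens=100, do_sample=False)
print(out[0]["generated_text"])
```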