The Limits of Text-to-Text Transformations
The limits we're exploring in this paper mostly pertain to how large we can make these models and how much data we can train them on. We trained a model that had around 11 billion parameters on about 750 gigabytes of text, or roughly a trillion tokens. The fact that this benchmark, which was designed to be difficult for machines, could be solved by a very similar algorithm is still somewhat surprising to me.
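For listeners who want to try the kind of model discussed here, the sketch below shows how one might load the publicly released 11-billion-parameter T5 checkpoint and run a single text-to-text prediction. The Hugging Face transformers library, the "t5-11b" checkpoint name, and the example prompt are our own additions for illustration; they are not mentioned in the episode.

```python
# Minimal sketch: loading an 11B-parameter text-to-text model and running one task.
# Assumes the `transformers` library and the public "t5-11b" checkpoint;
# the prompt is an illustrative example, not from the episode.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-11b")
model = T5ForConditionalGeneration.from_pretrained("t5-11b")  # ~11B parameters; needs substantial memory

# T5 frames every task as text-to-text: the task is named in the input string itself.
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern works for other tasks (summarization, question answering, classification) by changing only the text prefix, which is the point of the text-to-text framing the speaker describes.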