
The Limits of NLP

Data Skeptic


The Limits of Text-to-Text Transformations

The limits we're exploring in this paper mostly pertain to how large we can make these models and how much data we can train them on. We trained a model that had around 11 billion parameters on about 750 gigabytes of text, or about a trillion tokens. The fact that this benchmark that was designed to be difficult for machines could be solved by a very similar algorithm is still somewhat surprising to me.
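As a rough sanity check on those numbers (a back-of-envelope sketch of my own, not something stated in the episode): if an English subword tokenizer averages around four bytes per token, a 750 GB corpus holds on the order of 200 billion tokens, so training on roughly a trillion tokens implies several passes over the data. The bytes-per-token figure is an assumption.

```python
# Back-of-envelope estimate (assumption: ~4 bytes per token for English
# subword tokenizers; not a figure from the episode or the paper).
corpus_bytes = 750e9        # ~750 GB of cleaned web text
bytes_per_token = 4         # rough average, assumed

tokens_in_corpus = corpus_bytes / bytes_per_token
print(f"tokens in corpus: ~{tokens_in_corpus:.2e}")  # ~1.9e11, i.e. ~190B tokens

# Training on "about a trillion tokens" then implies repeated passes over the corpus.
tokens_trained = 1e12
print(f"passes over corpus: ~{tokens_trained / tokens_in_corpus:.1f}")  # ~5.3
```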
