
The Limits of NLP

Data Skeptic


Scaling Is Not the Most Satisfying Solution

It was about two orders of magnitude in model size, which, loosely speaking, represents how big we could make these models and still train them on existing hardware. You can fine-tune the model to pretty good performance on a new task in, let's say, a few hours on a single Google Cloud TPU. And Colab gives you the TPU for free.

Transcript
