
The Limits of NLP

Data Skeptic

00:00

The Encoder-Decoder Architecture in Transfer Learning for Natural Language Processing

Our 11 billion parameter model, you can't actually fit on a single Cloud TPU. You need to get a bigger slice of that accelerator that I mentioned earlier. We use the original encoder-decoder architecture. There are some minor differences in our transformer. For example, we use relative position embeddings, which were not in the original "Attention Is All You Need" paper. But otherwise, our model is very similar to just the vanilla transformer.
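To make the relative position embeddings concrete, here is a minimal NumPy sketch of the simplified scheme T5 uses: instead of adding absolute position vectors to the inputs as in the original transformer, a learned scalar bias for each (clipped) relative offset between query and key positions is added to the attention logits. The variable names and the `max_distance` clipping value are illustrative, not taken from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, d_model = 6, 8
max_distance = 4  # relative offsets beyond this are clipped to the same bucket

q = rng.normal(size=(seq_len, d_model))  # query vectors
k = rng.normal(size=(seq_len, d_model))  # key vectors

# One learned bias per clipped relative offset, from -max_distance to +max_distance.
rel_bias = rng.normal(size=(2 * max_distance + 1,))

# Relative offset matrix: offsets[i, j] = i - j, shifted to index the bias table.
offsets = np.arange(seq_len)[:, None] - np.arange(seq_len)[None, :]
offsets = np.clip(offsets, -max_distance, max_distance) + max_distance

# Standard scaled dot-product logits, plus the position-dependent bias.
logits = q @ k.T / np.sqrt(d_model) + rel_bias[offsets]

# Softmax over keys gives the attention weights.
weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
print(weights.shape)  # (6, 6)
```

Because the bias depends only on the offset between positions, the same learned values are shared across the whole sequence, which is part of what makes this scheme a lightweight departure from the vanilla transformer.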

