The Encoder-Decoder Architecture in Transfer Learning for Natural Language Processing
Our 11-billion-parameter model can't actually fit on a single Cloud TPU; you need a bigger slice of that accelerator I mentioned earlier. We use the original encoder-decoder architecture. There are some minor differences in our transformer. For example, we use relative position embeddings, which were not introduced in the "Attention Is All You Need" paper. But otherwise, our model is very similar to the vanilla transformer.
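As a rough illustration of the relative-position idea mentioned above, here is a minimal sketch (not the authors' code) of relative position embeddings as a learned, per-head additive bias on the attention logits. T5 actually uses log-spaced distance buckets; this sketch simply clamps the offset, and names such as `RelativePositionBias` and `max_distance` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RelativePositionBias(nn.Module):
    """Learned scalar bias per attention head for each clamped relative offset."""

    def __init__(self, num_heads: int, max_distance: int = 16):
        super().__init__()
        self.max_distance = max_distance
        # One learned scalar per head for each offset in [-max_distance, max_distance].
        self.bias = nn.Embedding(2 * max_distance + 1, num_heads)

    def forward(self, q_len: int, k_len: int) -> torch.Tensor:
        # relative[i, j] = j - i, clamped to the supported range.
        q_pos = torch.arange(q_len)[:, None]
        k_pos = torch.arange(k_len)[None, :]
        rel = (k_pos - q_pos).clamp(-self.max_distance, self.max_distance)
        rel = rel + self.max_distance  # shift to non-negative embedding indices
        # (q_len, k_len, num_heads) -> (num_heads, q_len, k_len), broadcastable over batch.
        return self.bias(rel).permute(2, 0, 1)

# Usage: add the bias to raw attention scores before the softmax.
scores = torch.randn(2, 8, 10, 10)              # (batch, heads, queries, keys)
scores = scores + RelativePositionBias(num_heads=8)(10, 10)
attn = scores.softmax(dim=-1)
```

Because the bias depends only on the offset j - i rather than on absolute positions, the same learned parameters apply at every position in the sequence, which is what distinguishes this scheme from the fixed sinusoidal encodings of the original transformer.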