Tokenization of the Language Model
The main architecture is exactly what you would expect. There are some nifty new things in there, like using ALiBi for the positional embeddings, but it's a basic text-to-text autoregressive model, the same architecture as your typical big text-to-text models.

I think this is a really neat part of NLP, very much a "tools of the trade" kind of topic, so let's take a moment. Tell us: what is a token? What is a tokenizer? And how did you do it differently with this big BLOOM model?
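To ground the question, here is a toy sketch of byte-pair-encoding (BPE) tokenization, the family of algorithms behind most large language model tokenizers, including BLOOM's. This is an illustrative assumption, not BLOOM's actual tokenizer: the merge table is hand-picked for the example rather than learned from a corpus.

```python
def bpe_tokenize(word, merges):
    """Split a word into characters, then greedily apply merge rules in order.

    A real BPE tokenizer learns its merge table from corpus statistics;
    here the merges are hypothetical and chosen by hand for illustration.
    """
    tokens = list(word)
    for pair in merges:  # merges are applied in learned priority order
        merged = "".join(pair)
        i = 0
        while i < len(tokens) - 1:
            if (tokens[i], tokens[i + 1]) == pair:
                tokens[i:i + 2] = [merged]  # fuse the adjacent pair
            else:
                i += 1
    return tokens

# Hypothetical learned merges: frequent character pairs become single tokens.
merges = [("l", "o"), ("lo", "w"), ("e", "r")]
print(bpe_tokenize("lower", merges))   # ['low', 'er']
print(bpe_tokenize("lowest", merges))  # ['low', 'e', 's', 't']
```

The key idea the hosts are getting at: a "token" is whatever unit the tokenizer emits (often a subword like `low` or `er`), and unseen words still tokenize, just into smaller pieces.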