
Eric Michaud on scaling, grokking and quantum interpretability
The Inside View
How to Cluster a Tokenizer
The dataset the models were trained on has over 100 million tokens. The clustering you did was automatic, right? You didn't force a cluster on the models — the new-line cluster, for instance, you didn't impose that; it just emerged automatically, and then you looked at the different clusters. How many clusters did you find in total, approximately?