The researchers used the same code that was used to train BLOOM, which is an open-source language model. They make a 50 billion parameter version of that and train it on this mixed dataset. And what they seem to find exciting is that the model manages to perform really well at general-purpose reasoning without losing its edge on finance tasks.
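To make the "mixed dataset" idea concrete, here is a minimal sketch, not the authors' pipeline, of how a domain corpus and a general-purpose corpus could be interleaved during pretraining. The document lists, the 50/50 weighting, and the function name are illustrative assumptions.

```python
import random

# Illustrative placeholder documents; a real run would stream tokenized corpora.
finance_docs = ["earnings call transcript ...", "regulatory filing excerpt ..."]
general_docs = ["encyclopedia article ...", "web crawl page ..."]

def mixed_stream(finance, general, finance_weight=0.5, seed=0):
    """Yield documents, drawing from the finance corpus with probability finance_weight."""
    rng = random.Random(seed)
    while True:
        source = finance if rng.random() < finance_weight else general
        yield rng.choice(source)

stream = mixed_stream(finance_docs, general_docs)
batch = [next(stream) for _ in range(4)]  # a tiny "batch" drawn from both domains
print(batch)
```

The point of this kind of mixing is exactly the trade-off the speaker highlights: enough general-purpose text to keep broad reasoning ability, enough domain text to stay strong on finance.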