How to Train a Code Model and Language Model
Training code and language models is actually very interesting, because it's only recently been widely recognized that including code in the training data improves the language-modeling capabilities of non-code models. So there's this open question right now that we're very interested in: how does the mixture of data in a model's training set affect its capabilities? It becomes quite difficult to tease this out because the internal mechanisms of these models are not well understood. But some of the work we've done here uses the idea of causal intervention: you take a training set, train one model on all of it and another on a version with part of the data (say, the code) removed, and then look at how the two models' representations differ.
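The episode doesn't specify how the representation comparison is done, but one common metric for comparing two models' representations of the same inputs is linear Centered Kernel Alignment (CKA). The sketch below is illustrative, not the speakers' actual method: the representation matrices are hypothetical stand-ins (random data perturbed with noise) for activations you would extract from a model trained on the full mixture versus an ablated one.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of shape
    (n_samples, n_features). Returns a similarity score in [0, 1],
    where 1 means the representations are identical up to rotation/scale."""
    # Center each feature dimension
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA = ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

# Hypothetical example: activations for the same 100 inputs from two models.
rng = np.random.default_rng(0)
reps_full = rng.normal(size=(100, 64))       # stand-in: model trained on full mixture
reps_ablated = reps_full + 0.1 * rng.normal(size=(100, 64))  # stand-in: ablated model
reps_unrelated = rng.normal(size=(100, 64))  # stand-in: an unrelated model

print(linear_cka(reps_full, reps_ablated))    # high: representations barely changed
print(linear_cka(reps_full, reps_unrelated))  # lower: representations diverge
```

In a real experiment, the rows would be activations at some layer for a shared evaluation set, and you would track how CKA between the full-mixture and ablated models changes across layers.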