Using Attention Masks in a Language Model
When you have a model that uses attention, one of the pieces you can manipulate is what people call the attention mask. So for example, if I'm predicting a sequence and I want to predict the next token, but I don't want to allow the model to look at future tokens, I should modify the attention mask so it can only look at tokens from the past. The reason this is important in our paper is that a language model, which is just trained for next-token prediction, uses this causal attention mask, where you don't allow the model to attend to the future. But you can tweak that architecture a bit and basically allow the model to attend fully, without any mask.
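As a rough illustration of the idea, here is a minimal sketch in PyTorch of single-head attention with a `causal` switch. The function name and the unbatched, single-head setup are assumptions for clarity, not the paper's actual implementation; the point is just that flipping the mask off turns causal attention into full attention.

```python
import torch

def attention(q, k, v, causal=True):
    # q, k, v: (seq_len, d) tensors for a single head, no batching (illustrative only)
    scores = q @ k.T / (q.shape[-1] ** 0.5)
    if causal:
        seq_len = q.shape[0]
        # Strict upper triangle marks "future" positions: token i may not see token j > i.
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
    # With causal=False, every token attends to every other token (no mask at all).
    weights = torch.softmax(scores, dim=-1)
    return weights @ v
```

Setting `causal=True` gives the usual language-model behaviour described above; `causal=False` is the tweaked, fully attending variant.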