
The Limits of NLP

Data Skeptic


Using Attention Masks in a Language Model

When you have a model that uses attention, one of the pieces you can manipulate is this thing people call the attention mask. So for example, if I'm predicting a sequence and I want to predict the next token, but I don't want to allow the model to look at the future tokens, I should modify the attention mask so it can only look at tokens from the past. The reason this is important in our paper is that a language model, which is just trained for next-step prediction, uses this causal attention mask where you don't allow the model to attend to the future. But you can tweak that architecture a bit and basically allow the model to attend fully, without any mask.
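
To make the distinction concrete, here is a minimal sketch (not from the episode) of a causal mask versus a full mask in PyTorch. The function name `masked_attention` and the tensor shapes are illustrative assumptions, not code from the paper being discussed.

```python
import torch

seq_len = 5

# Causal mask: position i may only attend to positions <= i (the past).
# This is the mask a standard next-token language model uses.
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

# Full mask: every position may attend to every other position,
# i.e. the model is also allowed to look at "future" tokens.
full_mask = torch.ones(seq_len, seq_len, dtype=torch.bool)

def masked_attention(q, k, v, mask):
    # Scaled dot-product attention: disallowed positions get -inf
    # before the softmax, so they receive zero attention weight.
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

# Example usage with random queries, keys, and values.
q = k = v = torch.randn(seq_len, 8)
out_causal = masked_attention(q, k, v, causal_mask)  # past-only attention
out_full = masked_attention(q, k, v, full_mask)      # unrestricted attention
```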
