130 - Linking human cognitive patterns to NLP Models, with Lisa Beinborn

NLP Highlights

Is There a Way to Make It Work?

First words are more important, independent of what's coming after. The problem is that if we go from left to right, then when we mask a token, we can only look at the previous tokens. And if we then average over, accumulate over everything, we automatically get a higher importance for the first tokens in a sentence. Even if they contribute very little to the contribution of each token, they are involved so many times that the accumulated score ends up high. So I would just say our setup doesn't generalize to this case. I think you can still do it, but you have to find a way to account a bit for this ordering effect.
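The accumulation effect described above can be illustrated with a small sketch (hypothetical, not from the episode): if each masked target spreads a uniform contribution over its preceding tokens, earlier tokens take part in more predictions and their summed importance is inflated purely by position.

```python
# Hypothetical illustration of the ordering effect described in the quote:
# with left-to-right masking, token i only receives attribution from
# targets t > i, so early tokens appear in many more attribution sets.
n = 6  # sentence length (illustrative)

# Assume each masked target t spreads a uniform contribution of 1/t
# over its t preceding tokens (every visible token contributes equally).
accumulated = [0.0] * n
for t in range(1, n):      # target positions 1..n-1
    for i in range(t):     # only previous tokens are visible
        accumulated[i] += 1.0 / t

# Even though per-step contributions are equal, the accumulated score
# strictly decreases with position: the first token is involved in
# every prediction, the last token in none.
print(accumulated)
```

This is why, without a correction for how often each position is involved, averaged left-to-right attributions make first tokens look more important regardless of their actual contribution.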
