
98 - Analyzing Information Flow In Transformers, With Elena Voita

NLP Highlights


A Study on Token Representation in Transformers

The study looked at how token representations in transformers evolve depending on the pretraining objective. For example, if we take untrained LSTMs and use probing tasks to predict the identities of neighboring tokens, they perform better than trained ones. But for language models, probing accuracy goes up until some layer and then goes down. It's not clear why this happens. So we tried to give a general explanation of the process behind such behavior, and our point of view on this is the information bottleneck. It's a method from the 1990s which tries to find a compressed representation of the input which contains as much information as possible about the output.
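The probing setup described here (training a classifier on frozen token representations to predict a neighboring token's identity) can be sketched minimally. Everything below is a toy illustration under stated assumptions, not the study's actual code: the "representations" are synthetic stand-ins for model activations, and the probe is a simple least-squares linear classifier.

```python
import numpy as np

# Toy sketch of the probing methodology: train a linear "probe" on frozen
# token representations to predict the identity of the next token, then
# compare accuracy to chance. All data is synthetic (hypothetical stand-in
# for real model activations).
rng = np.random.default_rng(0)
n, d, vocab = 4000, 16, 8

next_tok = rng.integers(0, vocab, size=n)             # labels: neighbor identity
emb = rng.normal(size=(vocab, d))                     # fake embedding table
reps = emb[next_tok] + 0.5 * rng.normal(size=(n, d))  # noisy "representations"

Xtr, Xte = reps[:3000], reps[3000:]
ytr, yte = next_tok[:3000], next_tok[3000:]

# Linear probe fit by least squares onto one-hot targets.
Y = np.eye(vocab)[ytr]
W, *_ = np.linalg.lstsq(Xtr, Y, rcond=None)

acc = ((Xte @ W).argmax(axis=1) == yte).mean()
print(f"probe accuracy: {acc:.2f} (chance ~ {1 / vocab:.2f})")
```

In the real experiments the representations come from a trained (or untrained) model's layers, and comparing probe accuracy across layers gives the rise-then-fall curve the speaker describes.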

