I'm always looking for what's happening that maybe isn't being talked about a ton yet but seems like it has transformative potential. I expect the next generation of contributions to the field will involve higher-level motifs in language models, so you can find behaviors, like the suppression done by these negativity heads in GPT-2 small, that generalize across training text. There was just this paper in the last few days from Microsoft Research around retention; they propose a somewhat different mechanism, which I don't really understand yet, and call it a possible successor to the transformer. It seems like there's potential here for divergent paths forward, and this may be an indication too that the history, in its particulars, could end up really mattering.
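[For context on the retention mechanism mentioned here: below is a minimal, single-head sketch of the recurrent retention rule from the Microsoft Research paper ("Retentive Network: A Successor to Transformer for Large Language Models", Sun et al., 2023). The shapes, the decay value, and the omission of gating and normalization are simplifying assumptions for illustration, not the paper's actual configuration.]

```python
import numpy as np

def recurrent_retention(Q, K, V, gamma=0.9):
    """Single-head retention in its recurrent form (simplified sketch).

    Each step folds the current key/value outer product into a running
    state S with exponential decay gamma, then reads it out with the
    query -- constant-size state per step, unlike attention's growing
    key/value cache.
    """
    seq_len, d_k = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d_k, d_v))           # recurrent state
    outputs = np.zeros((seq_len, d_v))
    for n in range(seq_len):
        S = gamma * S + np.outer(K[n], V[n])  # decay old state, add new info
        outputs[n] = Q[n] @ S                 # read the state with the query
    return outputs

# The equivalent parallel form is (Q K^T * D) V with D[n, m] = gamma**(n - m)
# for n >= m and 0 otherwise, which is what lets it train like a transformer
# while running like an RNN at inference time.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
print(recurrent_retention(Q, K, V).shape)  # (8, 4)
```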
