Hattie Zhou: Lottery Tickets and Algorithmic Reasoning in LLMs

The Gradient: Perspectives on AI

In-Context Learning and Gradient Descent

There is a cool natural link there. Another series of papers that I found kind of neat recently has been the ones trying to establish this relationship between in-context learning and gradient descent. With in-context learning, it feels like, well, literally, you're not updating the weights of your pre-trained model anymore. But some of these works did this really interesting set of derivations to look at a dual form between gradient descent and linear self-attention. And they understand in-context learning as a kind of implicit fine-tuning, where the transformer attention is doing a meta-optimization. Then in its forward pass, it basically is implementing gradient descent.
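The dual form mentioned here can be illustrated with a minimal sketch (not taken from any specific paper's code): for linear regression initialized at zero weights, one gradient-descent step on the in-context examples yields exactly the same query prediction as a linear (softmax-free) self-attention layer whose keys/values are the in-context inputs/targets. All names and the learning rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 16
X = rng.normal(size=(n, d))   # in-context inputs x_i
w_true = rng.normal(size=d)
y = X @ w_true                # in-context targets y_i
x_q = rng.normal(size=d)      # query input
eta = 0.1                     # learning rate (arbitrary choice)

# One gradient-descent step on L(w) = 0.5 * sum_i (w.x_i - y_i)^2,
# starting from w0 = 0:  w1 = w0 - eta * grad = eta * sum_i y_i x_i
w1 = eta * (y @ X)
pred_gd = w1 @ x_q

# Linear self-attention (no softmax): keys = x_i, values = y_i, query = x_q.
# Its output is eta * sum_i y_i (x_i . x_q) -- the same quantity.
pred_attn = eta * np.sum(y * (X @ x_q))

print(np.allclose(pred_gd, pred_attn))  # the two predictions coincide
```

The equality here is exact only in this simplified linear setting; the papers extend the correspondence to richer attention constructions and interpret multi-layer forward passes as multiple implicit optimization steps.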
