
126 - Optimizing Continuous Prompts for Generation, with Lisa Li

NLP Highlights

How Prefix Tuning Compares to Lightweight Fine-Tuning

In terms of making fine-tuning more lightweight, I'm aware of a couple of other techniques for doing that as well. How exactly does prefix tuning compare with some of those other methods? Yeah, sure. Adapter tuning leads to about 30 times fewer trainable parameters than fine-tuning while maintaining comparable performance, whereas prefix tuning has a more drastic reduction. Prefix tuning leads to about a thousand times fewer trainable parameters than fine-tuning while still maintaining comparable performance, so it is much more storage-efficient in that respect.
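To make the parameter comparison above concrete, here is a minimal sketch of how prefix tuning's trainable-parameter count can be estimated: the prefix contributes one key vector and one value vector per layer for each prefix position. The model dimensions and prefix length below are illustrative assumptions (roughly GPT-2 Medium-sized), not figures from the episode.

```python
def prefix_param_count(prefix_len: int, n_layers: int, hidden: int) -> int:
    """Trainable parameters for a prefix: one key and one value
    vector of size `hidden` per layer, per prefix position."""
    return prefix_len * n_layers * 2 * hidden

# Assumed GPT-2 Medium-like dimensions (24 layers, hidden size 1024)
# with a prefix of 10 positions.
prefix = prefix_param_count(prefix_len=10, n_layers=24, hidden=1024)
total = 345_000_000  # assumed full-model parameter count

print(f"prefix params: {prefix:,}")            # ~0.5M
print(f"fraction trained: {prefix / total:.4%}")
```

With these assumed sizes the prefix trains roughly 0.1% of the model's parameters, consistent with the "thousand times fewer" reduction described in the episode.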

