
126 - Optimizing Continuous Prompts for Generation, with Lisa Li

NLP Highlights


How Prefix Tuning Compares to Lightweight Fine-Tuning

In terms of making fine-tuning more lightweight, I'm aware of a couple of other techniques for doing that as well. How exactly does prefix tuning compare with some of those other methods?

Yeah, sure. Adapter tuning leads to about 30 times fewer trainable parameters than fine-tuning while maintaining comparable performance, whereas prefix tuning has a more drastic reduction. Prefix tuning leads to about a thousand times fewer trainable parameters than fine-tuning while still maintaining comparable performance, so it is much more storage-efficient in that perspective.
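A rough back-of-envelope calculation makes the scale of the reduction concrete. This is a minimal sketch assuming a GPT-2-Medium-like decoder (24 layers, hidden size 1024, roughly 345M total parameters) and a prefix of 10 virtual tokens; the specific sizes are illustrative assumptions, not figures from the episode.

```python
def prefix_tuning_params(prefix_len: int, num_layers: int, hidden: int) -> int:
    """Trainable parameters in prefix tuning: one key vector and one
    value vector per layer for each virtual prefix token."""
    return prefix_len * num_layers * 2 * hidden

# Assumed GPT-2-Medium-like model: full fine-tuning updates every weight.
full_finetune = 345_000_000

# Prefix tuning only trains the prefix activations.
prefix = prefix_tuning_params(prefix_len=10, num_layers=24, hidden=1024)

print(prefix)                  # 491520 trainable parameters
print(full_finetune / prefix)  # roughly a 700x reduction
```

With a short prefix the trainable-parameter count lands in the hundreds of thousands rather than the hundreds of millions, which is the kind of two-to-three-orders-of-magnitude gap the answer above describes.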
