126 - Optimizing Continuous Prompts for Generation, with Lisa Li

NLP Highlights

CHAPTER

Prefix Tuning Outperforms Fine Tuning

The main trend is that in table-to-text, with only 0.1% trainable parameters, prefix tuning performs comparably to, and sometimes even outperforms, fine-tuning and adapter tuning. In the full-data setting and in low-data settings, prefix tuning on average also outperforms fine-tuning. And then there's an extrapolation setting, which is quite interesting: we try to create an out-of-domain test distribution.
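To make the "0.1% trainable parameters" point concrete, here is a minimal sketch of the prefix tuning idea discussed in the episode: the pretrained language model is frozen, and only a small set of continuous prefix vectors, prepended as extra key/value pairs at every attention layer, is trained. This is not Lisa Li's implementation; it assumes PyTorch and Hugging Face transformers, and the class name `PrefixTuningGPT2` and prefix length are illustrative choices (the paper additionally reparameterizes the prefix through an MLP during training, which is omitted here).

```python
# Minimal prefix tuning sketch: frozen GPT-2, trainable per-layer prefix key/values.
# Assumes torch and transformers are installed; sizes and names are illustrative.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

class PrefixTuningGPT2(nn.Module):
    def __init__(self, model_name="gpt2", prefix_len=10):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained(model_name)
        # Freeze every LM parameter; only the prefix below receives gradients.
        for p in self.lm.parameters():
            p.requires_grad = False
        cfg = self.lm.config
        self.prefix_len = prefix_len
        self.n_layer = cfg.n_layer
        self.n_head = cfg.n_head
        self.head_dim = cfg.n_embd // cfg.n_head
        # Trainable prefix: one key and one value vector per layer and prefix position.
        self.prefix = nn.Parameter(
            torch.randn(cfg.n_layer, 2, prefix_len, cfg.n_embd) * 0.02
        )

    def forward(self, input_ids, labels=None):
        bsz = input_ids.size(0)
        # Expose the prefix to the frozen LM as past_key_values:
        # one (key, value) pair per layer, shaped (batch, heads, prefix_len, head_dim).
        past = []
        for layer in range(self.n_layer):
            kv = self.prefix[layer].unsqueeze(1).expand(2, bsz, self.prefix_len, -1)
            kv = kv.reshape(2, bsz, self.prefix_len, self.n_head, self.head_dim)
            kv = kv.permute(0, 1, 3, 2, 4)
            past.append((kv[0], kv[1]))
        # The attention mask must cover the prefix positions plus the real tokens.
        attn_mask = torch.ones(bsz, self.prefix_len + input_ids.size(1),
                               device=input_ids.device)
        return self.lm(input_ids=input_ids, past_key_values=tuple(past),
                       attention_mask=attn_mask, labels=labels)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = PrefixTuningGPT2()
batch = tokenizer("name : The Eagle | food : British", return_tensors="pt")
out = model(batch["input_ids"], labels=batch["input_ids"])
out.loss.backward()  # gradients flow only into model.prefix; the LM stays frozen
```

Because only `model.prefix` is trainable, the number of updated parameters is tiny relative to the full model, which is what makes the comparable-or-better results in the full-data, low-data, and extrapolation settings notable.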
