
126 - Optimizing Continuous Prompts for Generation, with Lisa Li

NLP Highlights


Prefix Tuning Outperforms Fine Tuning

The main trend is that on table-to-text, with only 0.1% trainable parameters, prefix tuning can perform comparably to, or even outperform, fine-tuning and adapter tuning. In both the full-data setting and low-data settings, prefix tuning on average also outperforms fine-tuning. And then there is the extrapolation setting, which is quite interesting: we try to create an out-of-domain test distribution.
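
To make the "0.1% trainable parameters" point concrete, here is a minimal sketch of the prefix-tuning idea: the pretrained language model is frozen, and only a short sequence of continuous prefix vectors is optimized. The class name `PrefixTuningSketch` and the parameters `prefix_len` and `hidden_dim` are illustrative assumptions, not names from the paper; the paper's method also injects prefix activations into every transformer layer (via a reparameterization MLP), whereas this sketch only prepends them at the input for brevity.

```python
import torch
import torch.nn as nn


class PrefixTuningSketch(nn.Module):
    """Frozen base model plus a small set of trainable continuous prefix vectors."""

    def __init__(self, base_lm: nn.Module, prefix_len: int = 10, hidden_dim: int = 768):
        super().__init__()
        self.base_lm = base_lm
        # Freeze every parameter of the pretrained model.
        for p in self.base_lm.parameters():
            p.requires_grad = False
        # The only trainable parameters: prefix_len continuous "virtual token" vectors.
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the learned prefix to every sequence in the batch, then run the frozen model.
        batch_size = input_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch_size, -1, -1)
        return self.base_lm(torch.cat([prefix, input_embeds], dim=1))


# Usage with a stand-in frozen encoder; only the prefix vectors contribute gradients.
lm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=2,
)
model = PrefixTuningSketch(lm, prefix_len=10, hidden_dim=768)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.4%}")
```

With a realistically sized model, the trainable fraction lands well under one percent, which is the regime the episode is describing.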

