
126 - Optimizing Continuous Prompts for Generation, with Lisa Li
NLP Highlights
The Differences Between Prefix Tuning and Adapter Tuning
The takeaway is that prefix tuning extrapolates to all three of these data sets and works well across a wide range of domains. Perhaps you have already mentioned it, but can you reiterate why prefix tuning, with the same number of parameters as the adapter, performs a little bit better?

We think this gain in parameter efficiency is because prefix tuning keeps the pre-trained language model intact as much as possible, more so than adapter tuning. That said, we don't really have a detailed comparison or a theoretical analysis of how this difference in inductive bias, or in architecture design, leads to different performance.

Thank you for your time.
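To make the contrast concrete, here is a minimal sketch of the idea being discussed, not the paper's actual implementation: a single prefix-tuned attention layer in PyTorch where the "pretrained" projections are frozen and only the prefix key/value vectors are trainable, so the base model's computation is left untouched. The dimensions, initialization scale, and single-head simplification are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PrefixTunedAttention(nn.Module):
    """One frozen self-attention layer with a trainable continuous prefix.

    The pretrained q/k/v projections are frozen; only the prefix
    key/value vectors receive gradients, so the base model stays
    intact -- the property discussed above.
    """

    def __init__(self, d_model: int, prefix_len: int):
        super().__init__()
        # Stand-ins for pretrained projections -- frozen, as in prefix tuning.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        for proj in (self.q_proj, self.k_proj, self.v_proj):
            for p in proj.parameters():
                p.requires_grad = False
        # Trainable continuous prefix: the only new parameters.
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b = x.size(0)
        q = self.q_proj(x)
        # Prepend prefix keys/values to the real keys/values,
        # so every token can attend to the learned prefix.
        k = torch.cat([self.prefix_k.expand(b, -1, -1), self.k_proj(x)], dim=1)
        v = torch.cat([self.prefix_v.expand(b, -1, -1), self.v_proj(x)], dim=1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v

layer = PrefixTunedAttention(d_model=64, prefix_len=10)
out = layer(torch.randn(2, 5, 64))  # -> (2, 5, 64)
# Only the prefix parameters are trainable:
print([n for n, p in layer.named_parameters() if p.requires_grad])
# ['prefix_k', 'prefix_v']
```

Adapter tuning, by contrast, inserts new trainable bottleneck modules between the frozen sublayers, so the frozen model's hidden states are transformed rather than merely attended over, which may be one way to read the "keeping the model intact" intuition above.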