
126 - Optimizing Continuous Prompts for Generation, with Lisa Li
NLP Highlights
Prefix Tuning on Encoder-Decoder Models
In prefix-tuning, as the word "prefix" suggests, we place the trainable parameters at the very beginning, in front of the input x and the output y. The takeaway from this ablation is that infix-tuning slightly underperforms prefix-tuning, which suggests a drop in expressiveness. Embedding-only tuning can't be sufficiently expressive for a generation task, as it suffers a relatively large performance drop compared to full prefix-tuning, according to our results.
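To make the placement difference concrete, here is a minimal PyTorch sketch (not the authors' code) of where the trainable vectors sit in the variants discussed: prefix [PREFIX; x; y] versus infix [x; INFIX; y]. The class name, `prefix_len`, and `hidden_dim` are illustrative assumptions; note that full prefix-tuning actually trains prefix activations at every layer of the frozen LM, whereas a sketch like this that only touches the embedding layer corresponds to the less expressive embedding-only ablation.

```python
import torch
import torch.nn as nn


class SoftTokenConcat(nn.Module):
    """Illustrative sketch: prepend (prefix) or insert (infix) trainable
    continuous vectors into the sequence of input embeddings while the
    language model itself stays frozen."""

    def __init__(self, prefix_len: int = 10, hidden_dim: int = 768, mode: str = "prefix"):
        super().__init__()
        self.mode = mode
        # The only trainable parameters: a small matrix of continuous "soft token" vectors.
        self.soft_tokens = nn.Parameter(torch.randn(prefix_len, hidden_dim) * 0.02)

    def forward(self, x_emb: torch.Tensor, y_emb: torch.Tensor) -> torch.Tensor:
        # x_emb: (batch, len_x, hidden) input embeddings
        # y_emb: (batch, len_y, hidden) output embeddings
        soft = self.soft_tokens.unsqueeze(0).expand(x_emb.size(0), -1, -1)
        if self.mode == "prefix":    # [PREFIX; x; y] -- prefix placement
            return torch.cat([soft, x_emb, y_emb], dim=1)
        elif self.mode == "infix":   # [x; INFIX; y] -- infix placement (slightly worse in the ablation)
            return torch.cat([x_emb, soft, y_emb], dim=1)
        raise ValueError(f"unknown mode: {self.mode}")
```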