
126 - Optimizing Continuous Prompts for Generation, with Lisa Li

NLP Highlights


Prefix Tuning: An Alternative to Fine Tuning

I expect prefix tuning to work well in a low-data setting or in an extrapolation setting, and moreover when the task requires a simple encoding but a more complex decoding, as in the case of table-to-text. From a storage perspective, we can imagine some interesting computing paradigms where maybe, in the future, the pre-trained model really gets too large to be stored or fine-tuned on standard local compute. All the users would just query this language model for output values, intermediate values, or even gradient information via some API. It's a similar intuition to the GPT-3 API, except that it would also give gradient information and some intermediate results.
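The core idea behind the prefix tuning discussed here can be sketched in a few lines: the pre-trained model's weights stay frozen, and only a small set of continuous prefix vectors, prepended to the input sequence, is trainable. This is a minimal illustrative sketch, not the paper's implementation — a single linear layer stands in for the frozen language model, and all names and dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, prefix_len, seq_len = 4, 2, 3  # illustrative sizes, not from the paper

# Frozen pre-trained weights: never updated during prefix tuning.
W_frozen = rng.normal(size=(d, d))

# Trainable continuous prefix: the ONLY parameters prefix tuning learns.
prefix = rng.normal(size=(prefix_len, d))

def forward(token_embeds, prefix):
    # Prepend the prefix vectors to the token embeddings, then run the
    # frozen model (here just one linear map standing in for the LM).
    h = np.concatenate([prefix, token_embeds], axis=0)
    return h @ W_frozen

x = rng.normal(size=(seq_len, d))   # embeddings for one input sequence
out = forward(x, prefix)
# The prefix occupies the first `prefix_len` positions of the sequence.
assert out.shape == (prefix_len + seq_len, d)
```

Because gradients are only needed for `prefix`, each downstream task can be stored as a few small vectors per task, which is what makes the "query a remote frozen model" paradigm in the quote plausible.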
