
126 - Optimizing Continuous Prompts for Generation, with Lisa Li

NLP Highlights


Prefix Tuning: An Alternative to Fine Tuning

I expect prefix tuning to work well in a low-data setting or in an extrapolation setting, especially when the task requires a simple encoding but a more complex decoding, as in the case of table-to-text generation. From a storage perspective, we can imagine some interesting computing paradigm where, in the future, the pretrained model gets too large to be stored or fine-tuned on standard local compute. Users would just query this language model for output values, intermediate values, or even gradient information via some API. The intuition is similar to the GPT-3 API, except that it also gives gradient information and some intermediate results.
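The core idea behind prefix tuning, which the quote alludes to, is to keep the pretrained model's weights frozen and optimize only a small continuous prefix prepended to the input. A minimal toy sketch of that idea follows; the tiny linear "model", the loss, and names like `prefix` and `train_prefix` are illustrative assumptions for this sketch, not anything from the episode or the paper:

```python
# Toy sketch of prefix tuning: gradient descent updates only the prefix
# parameters, never the frozen pretrained weights.

def model(frozen_w, prefix, inputs):
    # A stand-in "pretrained model": a frozen linear scorer over the
    # concatenated sequence [prefix + inputs].
    seq = prefix + inputs
    return sum(w * x for w, x in zip(frozen_w, seq))

def train_prefix(frozen_w, prefix, inputs, target, lr=1.0, steps=50):
    # Minimize 0.5 * (pred - target)^2 with respect to the prefix only.
    n = len(prefix)
    for _ in range(steps):
        pred = model(frozen_w, prefix, inputs)
        err = pred - target                           # d(loss)/d(pred)
        grad = [err * frozen_w[i] for i in range(n)]  # d(pred)/d(prefix_i) = w_i
        prefix = [p - lr * g for p, g in zip(prefix, grad)]
    return prefix  # frozen_w was never touched

frozen_w = [0.5, -0.3, 0.8, 0.2]   # frozen weights: 2 prefix slots, 2 input slots
inputs = [1.0, 2.0]
prefix = [0.0, 0.0]                # the only trainable parameters
prefix = train_prefix(frozen_w, prefix, inputs, target=3.0)
print(round(model(frozen_w, prefix, inputs), 3))  # -> 3.0
```

Because only the two prefix values are stored per task, many tasks can share one copy of the frozen model, which is exactly the storage argument made in the quote.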

