
126 - Optimizing Continuous Prompts for Generation, with Lisa Li

NLP Highlights


How Prefix Tuning Compares With Adapter Tuning

GPT-3 lies at the other extreme: GPT-3 uses an in-context learning framework. With in-context learning we don't need to do any training or parameter tuning; instead we just write down different prompts for different tasks, so essentially we don't need to save any parameters at all. However, in-context learning introduces some other problems. First, we can't exploit very large training sets, because GPT-3 has a bounded-length context window; that is, it can only attend to a bounded number of tokens. The second disadvantage is that we have to manually write the prompt, and this manually written prompt may be sub-optimal. And the third disadvantage is
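To make the contrast concrete, here is a minimal PyTorch sketch (not from the episode; the model handle, names, and dimensions are illustrative assumptions). In-context learning specifies a task purely through a written prompt with no parameter updates, while prefix tuning, the episode's topic, replaces that manual prompt with a small set of continuous vectors trained by gradient descent while the language model itself stays frozen. For brevity the sketch prepends the learned prefix at the embedding layer; the full method in the paper prepends activations at every transformer layer.

```python
import torch
import torch.nn as nn

# --- In-context learning: no parameters are trained or saved. ---
# Each task is specified by a manually written text prompt, and the
# frozen model can only attend to a bounded number of tokens.
prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese => "
)
# output = frozen_lm.generate(tokenize(prompt))  # hypothetical call, no gradients

# --- Prefix tuning: replace the manual prompt with trained vectors. ---
class Prefix(nn.Module):
    """A small set of continuous 'virtual token' embeddings, optimized
    by gradient descent while the language model stays frozen."""

    def __init__(self, prefix_len: int = 10, hidden_dim: int = 768):
        super().__init__()
        # Only these parameters are trained and saved per task.
        self.embeddings = nn.Parameter(torch.randn(prefix_len, hidden_dim))

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden_dim) from the frozen LM.
        batch = token_embeddings.size(0)
        prefix = self.embeddings.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the learned prefix so the frozen LM conditions on it.
        return torch.cat([prefix, token_embeddings], dim=1)
```

Because only the prefix parameters are optimized, each new task costs just a few thousand stored floats rather than a full model copy, and the learned continuous prompt is found by gradient descent rather than written by hand.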

