
126 - Optimizing Continuous Prompts for Generation, with Lisa Li



How Prefix Tuning Compares With Adapter Tuning

On the other extreme is GPT-3 and its in-context learning framework. We don't even need to do any training or parameter tuning when we apply in-context learning; instead, we just need to write down different prompts for different tasks, so we don't need to save any parameters at all, essentially. However, in-context learning introduces some other problems. First, we can't exploit very large training sets, because GPT-3 has a bounded-length context window; that is, it can only attend to a bounded number of tokens. The second disadvantage is that we have to manually write the prompt, and this manually written prompt may be sub-optimal. And the third disadvantage is
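
To make the contrast concrete, here is a minimal sketch of what in-context learning looks like in practice: the task is specified entirely by the text of the prompt, and the model's parameters are never trained or saved. The helper and the `frozen_lm_generate` call below are hypothetical illustrations, not an API discussed in the episode.

```python
def build_few_shot_prompt(instruction, examples, query):
    """Pack the task specification into the prompt itself:
    an instruction, a few demonstrations, and the new query."""
    demos = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{instruction}\n{demos}\nInput: {query}\nOutput:"

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("dog", "chien")],
    "house",
)
print(prompt)

# Only as many demonstrations as fit in the model's bounded-length context
# window can be included, which is why very large training sets go unexploited.
# answer = frozen_lm_generate(prompt)  # hypothetical call: no gradients, no saved parameters
```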

