
126 - Optimizing Continuous Prompts for Generation, with Lisa Li

NLP Highlights

CHAPTER

Table to Text and Summarization Tasks

Host: Can you tell us about the experiments — the actual tasks and baselines you experimented with?

Lisa Li: We experimented with two tasks: one is table-to-text generation and the other is summarization. The baselines we considered include full fine-tuning, fine-tuning just the top-k layers of the language model, and adapter tuning at different parameter levels. We tried 3%, which matches the common setting in adapter tuning, and we tried 0.1% just to have a fair comparison to prefix tuning. We also tried different task settings, including the full-data setting, the low-data setting, and the extrapolation setting.

Host: Thanks.
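To make the 0.1% figure concrete, here is a minimal sketch of how prefix tuning's trainable-parameter budget can be estimated. The prefix contributes one key vector and one value vector per layer per prefix position; everything else in the language model stays frozen. The function name, the prefix length of 10, and the use of GPT-2 Medium-scale dimensions (24 layers, hidden size 1024, roughly 345M parameters) are illustrative assumptions, not numbers quoted from the episode.

```python
def prefix_param_count(prefix_len: int, n_layers: int, hidden_dim: int) -> int:
    """Trainable parameters for prefix tuning: one key and one value
    vector of size hidden_dim per layer, per prefix position.
    (Illustrative accounting; the paper additionally reparameterizes
    the prefix with a small MLP during training.)"""
    return prefix_len * n_layers * 2 * hidden_dim

# GPT-2 Medium-scale dimensions: 24 layers, hidden size 1024, ~345M params.
total_params = 345_000_000
prefix = prefix_param_count(prefix_len=10, n_layers=24, hidden_dim=1024)
print(f"prefix params: {prefix:,}")                      # 491,520
print(f"trainable fraction: {prefix / total_params:.2%}")  # ≈ 0.14%
```

A 10-token prefix on a model of this scale trains roughly 0.1% of the parameters, which is why the hosts compare it against an adapter configuration shrunk to the same 0.1% budget rather than only the common 3% setting.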
