
126 - Optimizing Continuous Prompts for Generation, with Lisa Li

NLP Highlights


Table to Text and Summarization Tasks

Host: Can you tell us about the experiments, the actual tasks and baselines you experimented with?

Lisa Li: We experimented with two tasks: one is table-to-text and the other is summarization. The baselines we considered include full fine-tuning, fine-tuning just the top k layers of the language model, and adapter tuning at different parameter levels. We tried 3%, which is the common setting in adapter tuning, and 0.1% to have a fair comparison with prefix-tuning. We also tried different task settings, including the full-data setting, the low-data setting, and the extrapolation setting.

Host: Thanks, Lisa.
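A rough back-of-the-envelope sketch of where a ~0.1% trainable-parameter budget for prefix-tuning can come from. The model size, prefix length, layer count, and hidden dimension below are illustrative assumptions (roughly GPT-2 medium scale), not figures stated in the episode:

```python
# Assumed, GPT-2-medium-like sizes (illustrative, not from the episode).
lm_params = 345_000_000   # frozen language-model parameters
prefix_len = 10           # number of learned prefix positions
n_layers = 24             # transformer layers
d_model = 1024            # hidden size

# Prefix-tuning keeps the LM frozen and learns, for each layer, a key
# vector and a value vector per prefix position; only these are trained.
prefix_params = prefix_len * n_layers * 2 * d_model

fraction_pct = prefix_params / lm_params * 100
print(f"trainable: {prefix_params:,} params ({fraction_pct:.2f}% of the LM)")
```

Under these assumptions the prefix adds about half a million trainable parameters, on the order of 0.1% of the frozen model, which is the comparison point Lisa mentions against the 3% adapter setting.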
