
126 - Optimizing Continuous Prompts for Generation, with Lisa Li
NLP Highlights
The General Trends in the Results of Summarization Experiments
" prefixing does not outperform full fine tuning in this case will we only have like three percent parameters?" "We still see gains in low data settings and we still see gains when there is a topic mismatch or i.e. in the extrapolation settings" 'I think that this is the cause is related to the encoder capacity so basically when we run an ablation experiment where we allow the encoder parameters to be fine to end then the result gets improved,' he says. "'Prefixed summarization' would actually kind of avoid the overfitting issue or like alleviate the overfitting problem in both summarization and tables attacks," she adds.