
DoRA: Weight-Decomposed Low-Rank Adaptation

Papers Read on AI


Efficient Fine-Tuning with Weight Decomposition and PEFT Methods

This chapter introduces DoRA, a weight-decomposition method for parameter-efficient fine-tuning on NLP and vision-language tasks using LLM and LVLM backbones. DoRA achieves strong results on various benchmarks while maintaining inference efficiency, surpassing previous methods such as LoRA.
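To make the idea concrete, here is a minimal NumPy sketch of DoRA-style weight decomposition: the pretrained weight is split into a per-column magnitude and a directional component, and a LoRA-style low-rank update is applied to the direction before renormalizing. The dimensions and variable names are illustrative assumptions, not values from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration: output dim, input dim, LoRA rank.
d, k, r = 8, 6, 2

# Pretrained weight W0; its per-column magnitudes form the magnitude vector m.
W0 = rng.normal(size=(d, k))
m = np.linalg.norm(W0, axis=0, keepdims=True)  # shape (1, k)

# LoRA-style low-rank update on the directional component.
# B starts at zero so the adapted weight initially equals W0.
B = np.zeros((d, r))
A = rng.normal(size=(r, k))

# Merged weight: W' = m * (W0 + B @ A) / ||W0 + B @ A||_c (column-wise norm).
V = W0 + B @ A
W_prime = m * V / np.linalg.norm(V, axis=0, keepdims=True)

# Sanity check: with B = 0, the decomposition reconstructs W0 exactly.
assert np.allclose(W_prime, W0)
```

Because only `m`, `B`, and `A` are trained, the merged weight can be folded back into a single matrix after fine-tuning, which is why inference cost matches the original model.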

