
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference
Generally Intelligent
The Right Sparsity Schedule for Language Models
I think the pruning community has been writing lots of papers on what the right schedule for sparsity is. You can increase or decrease sparsity either at the beginning or at the end. So far we've been focusing on simpler approaches. We just want to ask: hey, what if we do this dumb thing of just using static sparsity? How well does it work? Can we understand the limits of just using static sparsity? And then maybe: how much dynamic sparsity do we actually need to add?
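The static-versus-dynamic distinction comes down to whether the pruning mask is fixed or recomputed as training proceeds. As a rough illustration (not from the episode; this PyTorch-style helper and its names are hypothetical), a static scheme computes one magnitude-based mask up front and reuses it for all subsequent steps:

```python
import torch

def static_sparsity_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Build a fixed binary mask keeping the largest-magnitude weights.

    In a static scheme this mask is computed once and reused for the rest
    of training; a dynamic scheme would recompute it periodically as the
    weights change.
    """
    flat = weight.abs().flatten()
    k = max(1, int(flat.numel() * (1.0 - sparsity)))  # weights to keep
    threshold = torch.topk(flat, k).values.min()      # smallest kept magnitude
    return (weight.abs() >= threshold).to(weight.dtype)

# Illustrative usage: prune a layer to 90% sparsity, once, up front.
w = torch.randn(512, 512)
mask = static_sparsity_mask(w, sparsity=0.9)
w_sparse = w * mask  # the mask stays fixed across training steps
```

A dynamic scheme would re-run the mask computation on some schedule as the weights evolve; that recomputation is exactly the extra machinery the speaker is asking whether we really need on top of the static baseline.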