
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference
Generally Intelligent
00:00
The Importance of Static Sparsity in Training
The team used a technique called static sparsity, which means you know beforehand which entries you want to be zero. Dynamic sparsity, by contrast, means you change the locations of the non-zeros from time to time. In our experience, static sparsity is just a lot easier to work with, and in terms of speed it's easier to implement. We've been wanting to make dynamic sparsity work but haven't been able to do so.
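A minimal sketch of the distinction being drawn, using NumPy (the arrays and the magnitude-based dynamic criterion are illustrative assumptions, not the team's actual method): a static mask fixes the zero locations once, while a dynamic scheme recomputes which entries to zero as the weights change.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))

# Static sparsity: the zero pattern is chosen ahead of time and never moves.
static_mask = np.array([[1, 0, 1, 0],
                        [0, 1, 0, 1],
                        [1, 0, 1, 0],
                        [0, 1, 0, 1]])
static_weights = weights * static_mask  # the same entries are zero every step

# Dynamic sparsity (one common variant): zero out the smallest-magnitude
# half of the entries, recomputed after each update, so the zero
# locations shift over time.
k = weights.size // 2
threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
dynamic_mask = (np.abs(weights) > threshold).astype(int)
dynamic_weights = weights * dynamic_mask
```

The static pattern is what makes fast kernels easy to write: the memory-access structure is known at compile time, whereas a dynamic pattern must be rediscovered (and its indices rebuilt) at run time.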