
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference
Generally Intelligent
The Future of Intuitive Learning
There are a couple of ways you can speed up inference. You can use sparsity, where you zero out some of the entries in the weight matrix. Or, instead of using 16 bits, you can use eight bits or four bits. These approaches are emerging and people are definitely paying attention. Maybe they're not widely deployed yet, but I think they will be in the future.
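The two ideas mentioned here can be illustrated with a minimal NumPy sketch (not from the episode): magnitude-based sparsity that zeroes the smallest weights, and 8-bit quantization that replaces 16-bit weights with integers plus a scale. The 50% sparsity level, the per-tensor scale, and the random matrix are illustrative assumptions.

```python
# Minimal sketch of weight sparsity and int8 quantization (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float16)  # a 16-bit weight matrix

# Sparsity: zero out the smallest-magnitude entries (here, 50% of them).
threshold = np.quantile(np.abs(W).astype(np.float32), 0.5)
W_sparse = np.where(np.abs(W) >= threshold, W, 0).astype(np.float16)

# Quantization: map 16-bit weights to 8-bit integers with a per-tensor scale.
scale = float(np.abs(W).max()) / 127.0
W_int8 = np.clip(np.round(W.astype(np.float32) / scale), -127, 127).astype(np.int8)

# At inference time the int8 weights are dequantized (or used in int8 matmuls).
W_dequant = W_int8.astype(np.float32) * scale

print("fraction of nonzeros kept:", np.count_nonzero(W_sparse) / W.size)
print("max quantization error:", np.abs(W.astype(np.float32) - W_dequant).max())
```

Both techniques shrink the memory traffic per weight, which is often the bottleneck during inference; in practice they are applied with calibration or fine-tuning rather than the naive thresholding shown here.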