
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference
Generally Intelligent
How to Preserve Quality if We Zero Out a Percentage of the Entries
I think a lot of these models are over-parameterized. They have more parameters than necessary, which is helpful for training. But it means that you're making two similar connections that you could have done with one. And so maybe, intuitively, for some tokens they're not really using the full capacity of the model.
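The idea of zeroing out a percentage of the entries corresponds to unstructured weight pruning. As a rough illustration only (not the specific method discussed in the episode), here is a minimal magnitude-pruning sketch in NumPy; the function name `magnitude_prune` and its parameters are hypothetical.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly `sparsity`
    fraction of the matrix becomes zero (simple unstructured pruning)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    # Entries at or below the threshold are set to zero; the rest are kept.
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Example: zero out 90% of a random weight matrix.
W = np.random.randn(512, 512)
W_sparse = magnitude_prune(W, sparsity=0.9)
print(f"fraction zeroed: {(W_sparse == 0).mean():.2f}")
```

Magnitude pruning is one common baseline; the quote's point is that over-parameterization is why such zeroing can often be done without destroying quality.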