
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference


How to Preserve Quality if We Zero Out a Percentage of the Entries

I think some of these models are over-parameterized. There are more parameters than necessary, which is helpful for training, but it means you're making two similar connections that you could have made with one. And so, maybe intuitively, some tokens aren't really using the full capacity of the model.
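The intuition here, that an over-parameterized model leaves some capacity unused, is what motivates zeroing out a fraction of the entries. Below is a minimal magnitude-pruning sketch in PyTorch; the function name, the 50% fraction, and the smallest-magnitude thresholding strategy are illustrative assumptions, not the specific technique discussed in the episode.

```python
import torch

def zero_out_fraction(weight: torch.Tensor, fraction: float = 0.5) -> torch.Tensor:
    """Zero out roughly `fraction` of the entries with the smallest magnitude.

    Illustrative magnitude-pruning sketch; the fraction and thresholding
    choice are assumptions for the example, not the episode's method.
    """
    flat = weight.abs().flatten()
    k = int(fraction * flat.numel())
    if k == 0:
        return weight.clone()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = flat.kthvalue(k).values
    mask = weight.abs() > threshold
    return weight * mask

# Example: prune about half of a toy weight matrix.
w = torch.randn(4, 4)
w_sparse = zero_out_fraction(w, fraction=0.5)
print((w_sparse == 0).float().mean())  # roughly 0.5
```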
