
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference

Generally Intelligent

CHAPTER

The Importance of Optimizing Hardware for Faster Attention

I've been working on follow-ups to FlashAttention. The original implementation is already a fair bit faster than the baseline, but when you actually measure the utilization of the device, it's not as high as you would expect. So that just says that, hey, there's still some headroom we can work on. And yeah, I plan to go back now that I understand the hardware a little bit better.
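(A minimal sketch, not from the episode, of what "measuring the utilization of the device" can look like in practice: time an attention call, estimate its FLOPs, and divide achieved FLOP/s by the GPU's peak. It assumes PyTorch with CUDA; the problem sizes and the `PEAK_TFLOPS` constant are placeholder values you would replace with your own hardware's spec.)

```python
# Hedged sketch: attention utilization = achieved FLOP/s / peak FLOP/s.
# PEAK_TFLOPS is a hypothetical value; look up your GPU's spec sheet.
import torch
import torch.nn.functional as F

batch, heads, seq_len, head_dim = 8, 16, 4096, 64
PEAK_TFLOPS = 312.0  # example: A100 BF16 peak; replace for your device

q, k, v = (torch.randn(batch, heads, seq_len, head_dim,
                       device="cuda", dtype=torch.bfloat16) for _ in range(3))

# Attention is two matmuls (Q @ K^T and P @ V): roughly 4 * b * h * n^2 * d FLOPs.
flops = 4 * batch * heads * seq_len**2 * head_dim

for _ in range(3):  # warm-up so timing excludes one-time setup costs
    F.scaled_dot_product_attention(q, k, v)

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
iters = 20
start.record()
for _ in range(iters):
    F.scaled_dot_product_attention(q, k, v)
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1000 / iters  # elapsed_time is in ms
achieved_tflops = flops / seconds / 1e12
print(f"achieved: {achieved_tflops:.1f} TFLOP/s, "
      f"utilization: {achieved_tflops / PEAK_TFLOPS:.1%}")
```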

