Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference

Generally Intelligent

CHAPTER

The Future of FlashAttention

I think there are a bunch of directions that I'm pretty excited about. On the system side, there's a lot of work to be done to have an entire stack that's really efficient. The PyTorch folks have done an amazing job with PyTorch 2.0, where they can now capture the graph and generate efficient code in, say, Triton. The Triton compiler then generates low-level code that runs on GPUs or other kinds of devices. We'll see more of this integration. Another area that may be well suited to long-context applications is chatbot personalization.
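
For context, a minimal sketch of the user-facing side of that pipeline (not from the episode; the module and shapes are illustrative): `torch.compile` captures the model's graph and, with the default Inductor backend, emits fused Triton kernels that the Triton compiler lowers to GPU code.

```python
import torch

# A small module whose forward pass PyTorch 2.0 can capture as a graph.
class MLP(torch.nn.Module):
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.fc1 = torch.nn.Linear(dim, dim)
        self.fc2 = torch.nn.Linear(dim, dim)

    def forward(self, x):
        return self.fc2(torch.nn.functional.gelu(self.fc1(x)))

model = MLP().cuda()

# torch.compile captures the graph; the default "inductor" backend
# generates Triton kernels, which Triton compiles to low-level GPU code.
compiled_model = torch.compile(model, backend="inductor")

x = torch.randn(8, 1024, device="cuda")
out = compiled_model(x)  # first call triggers compilation; later calls reuse the kernels
```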
