FlashAttention 2: making Transformers 800% faster w/o approximation - with Tri Dao of Together AI

Latent Space: The AI Engineer Podcast

CHAPTER

Optimizing with FlashAttention 2

This chapter covers the release of FlashAttention 2 and its integration with NVIDIA's Cutlass library, which improves GPU efficiency for the matrix operations at the core of attention. It also discusses hardware dependencies, advances in compilers, and the strategies AI hardware companies are pursuing amid rapid technological change.
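For context on the technique the chapter discusses: the core idea behind FlashAttention is to compute attention block by block with running softmax statistics ("online softmax"), so the full N x N score matrix is never materialized. The following is a minimal NumPy sketch of that idea, not the actual FlashAttention 2 implementation (which is a fused CUDA kernel built on Cutlass); all function names here are illustrative.

```python
import numpy as np

def naive_attention(Q, K, V):
    # Baseline: materializes the full N x N score matrix in memory.
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def tiled_attention(Q, K, V, block=4):
    # Processes K/V in blocks while carrying running softmax
    # statistics (max and denominator), so only an N x block
    # slice of scores exists at any time.
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros((N, d))
    m = np.full(N, -np.inf)   # running row-wise max of scores
    l = np.zeros(N)           # running softmax denominator
    for j in range(0, N, block):
        S = (Q @ K[j:j + block].T) * scale        # N x block scores
        m_new = np.maximum(m, S.max(axis=-1))
        correction = np.exp(m - m_new)            # rescale old stats
        P = np.exp(S - m_new[:, None])
        l = l * correction + P.sum(axis=-1)
        O = O * correction[:, None] + P @ V[j:j + block]
        m = m_new
    return O / l[:, None]
```

Both functions return the same result up to floating-point error; the tiled version is the memory-access pattern that, implemented as a fused GPU kernel, gives FlashAttention its speedup.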

00:00