
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference
Generally Intelligent
00:00
The Evolution of Butterfly Matrices in Machine Learning
We found that if you compose a bunch of these butterfly matrices, they can represent any fast transform. So when we put that into model training, we parameterize the weight matrices as butterfly matrices. They have certain nice inductive biases: because they can represent things like Fourier transforms, they're quite well suited for audio and speech recognition models. We've also been thinking about how to make these things hardware efficient, that is, how to make them work well on GPUs.
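The claim that composed butterfly matrices represent fast transforms can be checked concretely on the smallest interesting case. Below is a minimal NumPy sketch (my own illustration, not code from the speaker's library): one Cooley-Tukey step factors the dense 4-point DFT matrix into a permutation, a block of half-size DFTs, and a single butterfly factor with twiddle factors.

```python
import numpy as np

n = 4
omega = np.exp(-2j * np.pi / n)

# Dense DFT matrix: F[j, k] = omega^(j*k)
F4 = np.array([[omega ** (j * k) for k in range(n)] for j in range(n)])

# Even-odd permutation (perfect shuffle): [x0, x1, x2, x3] -> [x0, x2, x1, x3]
P = np.eye(n)[[0, 2, 1, 3]]

# Two half-size DFTs applied to the even and odd parts (block diagonal)
F2 = np.array([[1, 1], [1, -1]], dtype=complex)
block = np.block([[F2, np.zeros((2, 2))],
                  [np.zeros((2, 2)), F2]])

# Butterfly factor: combines the two halves using twiddle factors D = diag(1, omega)
D = np.diag([1, omega])
I2 = np.eye(2)
B = np.block([[I2, D],
              [I2, -D]])

# One butterfly step of the FFT: F4 == B @ (I2 (x) F2) @ P
assert np.allclose(F4, B @ block @ P)
print("butterfly factorization matches the dense DFT")
```

Recursing the same factorization on the half-size blocks yields the full O(n log n) FFT; replacing the fixed twiddle entries with learned parameters gives the trainable butterfly weight matrices described above.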