
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference
Generally Intelligent
The Inductive Bias in Language Model Training
The inductive bias, you think, from the block sparse, the block sparsity, and like this particular setup for language models is not quite right. And so you have to go back to the larger, denser matrices at some point anyway. So maybe you're not buying a lot in this case by having this extra complexity. There's still a lot of work to be done.
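As an illustration for readers (not from the episode): "block sparsity" here means zeroing out whole tiles of a weight or attention matrix rather than individual entries. Below is a minimal sketch of a block-sparse attention mask; the block size, the random block layout, and the helper name `block_sparse_mask` are assumptions made purely for demonstration, not anything described by the speaker.

```python
import torch

def block_sparse_mask(seq_len: int, block_size: int, keep_prob: float = 0.5) -> torch.Tensor:
    """Return a (seq_len, seq_len) boolean mask that keeps or drops whole blocks."""
    n_blocks = seq_len // block_size
    # Randomly decide which (query-block, key-block) pairs are kept.
    block_layout = torch.rand(n_blocks, n_blocks) < keep_prob
    # Always keep the diagonal blocks so every token attends to its local block.
    block_layout |= torch.eye(n_blocks, dtype=torch.bool)
    # Expand each block decision into a block_size x block_size tile.
    return block_layout.repeat_interleave(block_size, dim=0).repeat_interleave(block_size, dim=1)

# Hypothetical usage: mask dense attention scores with the block pattern.
mask = block_sparse_mask(seq_len=512, block_size=64)
scores = torch.randn(512, 512)
masked_scores = scores.masked_fill(~mask, float("-inf"))
```

The excerpt's point, as I read it, is that dropping blocks only helps if the kernel actually skips them; if you end up falling back to the larger, dense matrices anyway, the extra complexity may not buy much.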