Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference

CHAPTER

The Future of Language Models

I feel like our unusual or controversial take is that architecture matters less than people think. My prior is that as long as your model architecture is reasonable and hardware efficient, and you have lots of compute, the model will just do well. This remains to be validated, and so on. But in the future we'll maybe see more model diversity catering to different needs, rather than this one architecture that everyone is using.
