
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference


Chapter: The Future of Inference

In the short term, I think there's going to be a lot more focus on inference as these models are being deployed. Apple has already put Neural Engines in iPhones to do model inference, and we're going to see more of that. Inference could also mean personalization: how do you design a model that can take in a really long context for, let's say, a chatbot? So when it comes to model design, I think people are going to think more about inference.

