Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference

Generally Intelligent

The Future of Inference

In the short term, I think there's going to be a lot more focus on inference as these models are being deployed. Apple has already put Neural Engines on iPhones to do model inference, and we're going to see more of that. Inference could also mean personalization: how do you design a model that can take in really long context for, let's say, a chatbot? So when it comes to model design, I think people are going to think more about inference.
