
Fast Inference with Hassan El Mghari

Software Huddle

CHAPTER

Optimizing Inference Speed in AI

This chapter explores the critical role of speed in inference engines for AI and LLM workloads. It covers strategies for improving performance, including speculative decoding and the Together Kernels Collection, along with the roles of fine-tuning and prompt engineering. The chapter closes with insights into how customers choose machine learning models, noting that combining several open-source models often yields better application outcomes.
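
For context on the speculative decoding technique mentioned above, here is a minimal, self-contained sketch of the greedy-verification variant (accept the draft model's tokens as long as the target model would have produced the same ones; the full algorithm uses probabilistic acceptance instead). The `draft_next` and `target_next` functions are toy stand-ins for a small draft model and a large target model, not anything from Together's actual stack.

```python
import random

# Toy "models": each maps a token sequence to a next token.
# In practice these would be a cheap draft LLM and an expensive
# target LLM; here they are deterministic stand-ins so the sketch
# runs on its own.
def draft_next(tokens):
    random.seed(sum(tokens) % 1000)
    return random.randrange(50)

def target_next(tokens):
    random.seed(sum(tokens) % 997)
    return random.randrange(50)

def speculative_decode(prompt, max_new=32, k=4):
    tokens = list(prompt)
    while len(tokens) < len(prompt) + max_new:
        # 1. Draft model cheaply proposes k tokens autoregressively.
        draft, ctx = [], tokens[:]
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. Target model checks each drafted position; keep the
        #    longest prefix it agrees with. (In a real engine all k
        #    positions are scored in one batched forward pass.)
        accepted = 0
        for i in range(k):
            if target_next(tokens + draft[:i]) == draft[i]:
                accepted += 1
            else:
                break
        tokens += draft[:accepted]
        # 3. Always take one token from the target model (its
        #    correction on a mismatch, or a fresh token if all k
        #    drafts were accepted), so progress is guaranteed.
        tokens.append(target_next(tokens))
    return tokens

print(speculative_decode([1, 2, 3], max_new=16, k=4))
```

The speedup comes from step 2: the target model can verify all k drafted positions in a single batched forward pass, so each iteration costs roughly one large-model call yet can emit up to k+1 tokens.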
