
Everything you need to run Mission Critical Inference (ft. DeepSeek v3 + SGLang)

Latent Space: The AI Engineer Podcast

Advancements in Mixture of Experts and Frameworks

This chapter examines fine-grained mixture-of-experts (MoE) models, their significance, and the challenges they pose in the machine learning landscape. It traces the evolution of serving technologies such as DeepSeek and the Truss SDK, emphasizing the framework flexibility needed to meet diverse customer requirements. It also covers the development of the SGLang framework, its performance enhancements, and the critical role of transparency and optimization in serving complex inference workloads.
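For a concrete picture of the routing step behind fine-grained MoE models like DeepSeek v3, here is a minimal top-k token router in PyTorch. This is a generic sketch, not DeepSeek's or SGLang's actual implementation: the names `moe_route`, `gate_weight`, and `k` are illustrative, and production routers add load-balancing terms and expert capacity limits.

```python
import torch
import torch.nn.functional as F

def moe_route(x: torch.Tensor, gate_weight: torch.Tensor, k: int = 2):
    """Pick the top-k experts for each token and renormalize their weights.

    x:           (num_tokens, d_model) token activations
    gate_weight: (d_model, num_experts) router projection
    """
    logits = x @ gate_weight                  # (num_tokens, num_experts)
    probs = F.softmax(logits, dim=-1)         # router distribution per token
    weights, experts = probs.topk(k, dim=-1)  # k experts per token
    weights = weights / weights.sum(dim=-1, keepdim=True)
    return experts, weights

# Toy usage: 4 tokens, model dim 8, 16 small ("fine-grained") experts.
x = torch.randn(4, 8)
gate = torch.randn(8, 16)
experts, weights = moe_route(x, gate, k=2)
print(experts.shape, weights.shape)  # torch.Size([4, 2]) torch.Size([4, 2])
```

Fine-grained MoE designs use many small experts rather than a few large ones, which is what makes per-token routing and expert placement a central serving challenge.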
