LLM-D, with Clayton Coleman and Rob Shaw

Kubernetes Podcast from Google

Optimizing AI Workloads with llm-d and Inference Gateway

This chapter covers the features of llm-d and the Inference Gateway, and their role in running AI inference workloads efficiently on Kubernetes clusters. It also describes how llm-d combines proven components from existing projects into a comprehensive open-source stack for serving inference applications.
