
Boosting LLM/RAG Workflows & Scheduling w/ Composable Memory and Checkpointing // Bernie Wu // #270

MLOps.community


Navigating AI Workflow Scheduling

This chapter examines the challenges of scheduling workflows and managing computational resources in large language model (LLM) and retrieval-augmented generation (RAG) systems. It discusses the limitations of existing tools such as Kubernetes in handling these workloads and makes the case for self-adaptive scheduling solutions. Key topics include GPU and memory allocation, the economics of resource management, and the ongoing evolution of AI deployment technologies.
