Building and Serving Models with Kubeflow
This chapter explores serving models with Kubeflow: creating a custom model service class, handling user requests, loading models from S3, and preparing models for deployment. It walks through setting up the model serving process, containerizing it with Docker, handling errors, and deploying the model as a service with KFServing. The speaker stresses that the details of the deployment process matter, and shares insights into scaling, building, and debugging for a smooth model serving experience.
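The custom model service class described above might look like the following sketch. It is illustrative only: the class name, the S3 URI, and the stubbed `load` logic are assumptions, not the chapter's actual code. A real KFServing service would subclass the SDK's model class and download weights from S3 (for example with boto3) inside `load`, but the request-handling shape — a `load` method that flips a readiness flag, and a `predict` method that validates input and returns errors as JSON instead of crashing the server — follows the pattern the chapter outlines.

```python
class PodcastModel:
    """Sketch of a custom model service class (hypothetical; a real
    KFServing deployment would subclass the SDK's model base class)."""

    def __init__(self, name: str, model_uri: str):
        self.name = name
        # Hypothetical S3 location; a real load() would fetch this with boto3.
        self.model_uri = model_uri
        self.model = None
        self.ready = False

    def load(self) -> None:
        # Stand-in for downloading and deserializing the model from S3.
        # Here the "model" just maps each input to its string length.
        self.model = lambda instances: [len(str(item)) for item in instances]
        self.ready = True

    def predict(self, request: dict) -> dict:
        # Handle user requests defensively: report errors in the response
        # body rather than raising and killing the serving process.
        if not self.ready:
            return {"error": "model not loaded"}
        instances = request.get("instances")
        if instances is None:
            return {"error": "request must contain an 'instances' field"}
        return {"predictions": self.model(instances)}


if __name__ == "__main__":
    model = PodcastModel("demo-model", "s3://example-bucket/model.bin")
    model.load()
    print(model.predict({"instances": ["ab", "cdef"]}))
```

Keeping the readiness flag and input validation inside the class is what lets the containerized server return clean error responses, which matches the chapter's emphasis on error handling before deployment.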