
SageMaker Serverless Inference illustrates Amazon’s philosophy for ML workloads. Featuring Bratin Saha, AWS VP of Machine Learning

How Does SageMaker Deployment Work?

Inference dominates the operational cost of running machine learning models in production. And it's also very important to have many options for inference, and you highlight some of those in your introduction. Inference is a very important part because ultimately, when you're doing training, you're just building the model. It's when you're making predictions that you're extracting insights from data.
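For context on what one of those inference options looks like in practice, here is a minimal sketch of deploying a model to a SageMaker Serverless Inference endpoint with the sagemaker Python SDK. The container image URI, model artifact path, and IAM role ARN are placeholders, not values discussed in the episode.

```python
# Minimal sketch of SageMaker Serverless Inference deployment.
# All <...> values are placeholders to be replaced with real resources.
from sagemaker.model import Model
from sagemaker.serverless import ServerlessInferenceConfig

model = Model(
    image_uri="<inference-container-image-uri>",    # placeholder container image
    model_data="s3://<bucket>/<path>/model.tar.gz",  # placeholder model artifact
    role="<execution-role-arn>",                     # placeholder IAM role
)

# Serverless endpoints scale with request traffic and bill per invocation,
# so there is no always-on instance to pay for between predictions.
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=2048,  # memory allocated per invocation
    max_concurrency=5,       # maximum concurrent invocations
)

predictor = model.deploy(serverless_inference_config=serverless_config)
print(predictor.endpoint_name)
```

The same `Model` object can instead be deployed to a real-time instance-backed endpoint or used for batch transform, which is the range of deployment options the conversation refers to.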
