MLA 014 Machine Learning Hosting and Serverless Deployment

Machine Learning Guide

Using AWS Batch to Run a Machine Learning Model

SageMaker is the end-to-end training and deployment offering. The traditional, training-only offering is something called AWS Batch. AWS Batch lets you run a Docker container to completion, and then it dies. It's compatible with Elastic Inference. So even though the SageMaker deployment of your model carries roughly a 40% markup on whatever instance you're using, and is always up, you can still shave some cost off by not using one of the GPU instances.
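To make the run-to-completion idea concrete, here is a minimal sketch of submitting a training job to AWS Batch with boto3. The job name, queue, job definition, and the S3 path are placeholder assumptions, not values from the episode; the container referenced by the job definition would hold your training code and exit when training finishes.

```python
import json

def build_training_job(job_name, job_queue, job_definition, s3_data_path):
    """Assemble parameters for boto3's batch.submit_job().

    AWS Batch pulls the container named in the job definition, runs it
    to completion, and tears the compute down afterwards -- there is no
    always-on endpoint as with a SageMaker deployment.
    """
    return {
        "jobName": job_name,
        "jobQueue": job_queue,
        "jobDefinition": job_definition,
        "containerOverrides": {
            # Pass the training-data location to the container as an env var.
            "environment": [{"name": "S3_DATA", "value": s3_data_path}],
        },
    }

params = build_training_job(
    "train-model",          # hypothetical job name
    "ml-training-queue",    # hypothetical job queue
    "trainer-jobdef:1",     # hypothetical job definition (Docker image + resources)
    "s3://my-bucket/data",  # hypothetical data location
)
print(json.dumps(params, indent=2))

# To actually submit (requires AWS credentials and the resources above):
# import boto3
# boto3.client("batch").submit_job(**params)
```

The submit call itself is left commented out so the sketch stays runnable offline; with real resources in place, `submit_job` returns a job ID you can poll with `describe_jobs`.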
