
MLA 016 AWS SageMaker MLOps 2

Machine Learning Guide


Using Batch Transform to Scale Your Machine Learning Models

You don't have to deploy your model. You can use something called Batch Transform. Let's say you're not using a whole lot of CPU or RAM; you can use Elastic Inference to attach a GPU or an Inferentia (Inf) chip to your instance, so you have more fine-grained control over the type of environment that you set up. It's very similar to using AWS Batch. AWS Batch is a dedicated service for running one-off jobs using a Docker container on whatever EC2 instance you want. But Batch Transform is all tied into the SageMaker tooling, so you get all the other features that I've mentioned before.
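As a rough illustration of what kicking off a Batch Transform job looks like, here is a minimal sketch using boto3. The job name, model name, bucket paths, and instance type are placeholders; it assumes a SageMaker model has already been created and the input data already sits in S3.

```python
# Minimal sketch of launching a SageMaker Batch Transform job with boto3.
# Assumes a SageMaker model named "my-model" already exists and that the
# S3 bucket/prefixes below are placeholders you would replace.
import boto3

sm = boto3.client("sagemaker")

sm.create_transform_job(
    TransformJobName="my-batch-scoring-job",      # hypothetical job name
    ModelName="my-model",                         # previously registered SageMaker model
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/batch-input/",   # placeholder input prefix
            }
        },
        "ContentType": "text/csv",
        "SplitType": "Line",                      # treat each line as one record
    },
    TransformOutput={
        "S3OutputPath": "s3://my-bucket/batch-output/",   # placeholder output prefix
    },
    TransformResources={
        "InstanceType": "ml.m5.xlarge",           # choose CPU/GPU instance as needed
        "InstanceCount": 1,
    },
)
```

The instances spin up only for the duration of the job and are torn down afterward, which is what makes this cheaper than keeping a real-time endpoint deployed when you only need periodic, offline scoring.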

