
SageMaker Serverless Inference illustrates Amazon’s philosophy for ML workloads. Featuring Bratin Saha, AWS VP of Machine Learning
Orchestrate all the Things
Is It Going to Require Code Changes or Is It Going to Be on the Configuration Level?
It's mostly going to be at the configuration level, but we have different APIs as well, so we can give you those details offline. We actually made it pretty easy for customers to move from one configuration to the other.

All right. Then I guess the last area to touch on is where you go from here. You already have quite a comprehensive offering, now that the feature you recently announced has gone GA, and, as you said in the beginning, the way you move forward is by listening to your customers. Can you perhaps allude, let's say, not to something specific that you'll be working on, but then some
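As a rough illustration of the "configuration, not code" point above: with the SageMaker `CreateEndpointConfig` API, the difference between a provisioned and a serverless endpoint comes down to which fields appear in the production-variant payload. The helper below is a hypothetical sketch (the function name and the specific memory/concurrency values are illustrative, not from the interview):

```python
# Sketch: switching a SageMaker endpoint between provisioned and serverless
# capacity is a change to the endpoint-config payload, not to model code.
# The helper and its default values are illustrative assumptions.

def endpoint_config_variant(variant_name, model_name, serverless=False):
    """Build one entry for CreateEndpointConfig's ProductionVariants list."""
    variant = {
        "VariantName": variant_name,
        "ModelName": model_name,
    }
    if serverless:
        # Serverless: declare memory and concurrency; no instances to manage.
        variant["ServerlessConfig"] = {
            "MemorySizeInMB": 2048,
            "MaxConcurrency": 5,
        }
    else:
        # Provisioned: declare an instance type and count instead.
        variant["InstanceType"] = "ml.m5.large"
        variant["InitialInstanceCount"] = 1
    return variant
```

In practice the resulting dict would be passed as `ProductionVariants=[...]` to boto3's SageMaker `create_endpoint_config` call; the model artifact and inference code stay the same either way.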