Multi-Model Endpoints Support for GPU
In MME, for each endpoint that customers provision, we have a shared fleet of instances behind it. The customer tells us, hey, I want this instance type, and I want to provision, let's say, 10 instances for it. You also tell us where your models are stored. Then MME loads and unloads models on that shared fleet of instances based on the traffic that we see, and it tries to optimize for cost by understanding your traffic pattern.
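From the caller's side, the key point is that every request names the specific model it wants, and the service loads that model onto one of the shared instances on demand. A minimal sketch of what such a request might look like, assuming the SageMaker runtime `InvokeEndpoint` API with its `TargetModel` parameter; the endpoint name and model artifact name below are hypothetical examples:

```python
# Sketch of invoking a multi-model endpoint (MME).
# Endpoint and model names here are hypothetical placeholders.

def build_mme_request(endpoint_name: str, target_model: str, payload: bytes) -> dict:
    """Build the parameters for an InvokeEndpoint call against an MME.

    With MME, every request names the model artifact it wants via
    TargetModel; the service loads that model onto one of the shared
    instances on demand and can evict cold models when memory is needed.
    """
    return {
        "EndpointName": endpoint_name,
        "TargetModel": target_model,  # artifact under the endpoint's model prefix
        "ContentType": "application/json",
        "Body": payload,
    }

request = build_mme_request(
    "my-mme-endpoint",        # hypothetical endpoint name
    "model-7.tar.gz",         # hypothetical model artifact
    b'{"inputs": [1, 2, 3]}',
)

# The actual invocation would then be along the lines of:
#   import boto3
#   runtime = boto3.client("sagemaker-runtime")
#   response = runtime.invoke_endpoint(**request)
```

Because `TargetModel` travels with each request rather than being fixed at endpoint creation, one fleet of instances can serve many models, which is what lets the service trade loading latency for lower cost based on observed traffic.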