Using Model Inference to Scale Out a Scalable Environment
Eric: I have always had this disdain for maintaining a whole bunch of local environments. But with Modal, you can just add that as a dependency in the Modal function, and it runs in the cloud in its own container. So I actually never even have to install it locally. People started leveraging that for building full-blown web apps on Modal. We've seen a lot of traction on online inference and model deployments.