End-to-end cloud compute for AI/ML (Practical AI #214)

Changelog Master Feed

Using Modal to Scale Out Model Inference in the Cloud

Eric: I've always had this disdain for maintaining a whole bunch of local environments as well. But with Modal, you can just add that as a dependency in the Modal function, and it runs in the cloud in its own container. So I actually never even have to install it locally. People started leveraging that for building full-blown web apps on Modal. We've seen a lot of traction on online inference and model deployments.
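
For context on the workflow Eric describes, here is a minimal sketch of declaring dependencies on the function itself so they are installed in a cloud container rather than locally. It assumes Modal's Python client; the specific names used here (modal.App, Image.debian_slim, the classify function) are illustrative and may differ across client versions.

```python
# Minimal sketch of the pattern Eric describes: dependencies are declared on
# the container image attached to the function, so nothing is installed locally.
# Assumes Modal's Python client; exact names may vary between versions.
import modal

# Packages listed here are installed into the cloud container, not on the laptop.
image = modal.Image.debian_slim().pip_install("transformers", "torch")

app = modal.App("inference-example")

@app.function(image=image)
def classify(text: str):
    # This body runs only inside the cloud container, where the
    # dependencies above are available.
    from transformers import pipeline
    return pipeline("sentiment-analysis")(text)

@app.local_entrypoint()
def main():
    # `modal run this_file.py` provisions the container remotely and
    # prints the result back locally.
    print(classify.remote("Modal makes cloud deployment feel local."))
```

In this sketch, running the script builds the image and executes classify in its own remote container, which is what keeps the local machine free of the dependencies.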
