MLA 014 Machine Learning Hosting and Serverless Deployment

Machine Learning Guide

The Cost of Inference of a Machine Learning Model on Lambda

Some companies have found that even though Lambda offers reduced compute performance for model inference, it is sufficient for their needs. If you could attach a GPU to Lambda so that it spins up only when a model inference is requested and goes offline afterward, you could save an immense amount of money. However, you can't use a GPU on Lambda. Some teams nonetheless still use Lambda for their machine learning deployments, and they attach EFS to the Lambda function so that it has access to a file system where the model artifacts live.
