
Victor Dibia on TensorFlow.js and Building Machine Learning Models with JavaScript

The InfoQ Podcast


How to Optimize Your Deep Learning Model for Size and Latency

Given the same amount of resources, a model that you build, train, and run server side will perform the same as a model deployed to a browser. But there are clearly some caveats here, and they matter. Deep learning models tend to be large. Just to give an example, recent models like BERT and GPT-2 can be as large as 500 megabytes or more.
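As a rough illustration of the size concern raised here, the sketch below (TypeScript, using the public @tensorflow/tfjs API; the model URL is a placeholder, not a real hosted model) loads a TensorFlow.js Layers model in the browser and estimates how many megabytes of float32 weights it carries before any size optimization.

```typescript
// Minimal sketch: load a TF.js Layers model and estimate its weight size.
// The URL passed in is an assumption for illustration only.
import * as tf from '@tensorflow/tfjs';

async function loadAndMeasure(modelUrl: string): Promise<tf.LayersModel> {
  const model = await tf.loadLayersModel(modelUrl);

  // Rough size estimate: total number of weight values * 4 bytes (float32).
  const numParams = model
    .getWeights()
    .reduce((total, w) => total + w.size, 0);
  console.log(`~${((numParams * 4) / 1e6).toFixed(1)} MB of float32 weights`);

  return model;
}

// Usage (placeholder URL):
// const model = await loadAndMeasure('https://example.com/model/model.json');
```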

