
Hien Luu on ML Principles at DoorDash

The InfoQ Podcast

CHAPTER

The Future of Deep Learning

GV: How do we support model training at scale using GPUs, for example? And then similarly on the prediction side, how do we support low-latency prediction for those large, complex deep learning models? That is going to be our focus in 2023. It also comes back to the ROI of a use case: whether it makes sense to leverage GPUs or not depends on the impact that use case might have on the company.
