Stellar inference speed via AutoNAS

Practical AI

Optimizing Inference in Machine Learning (02:16)

This chapter explores the significance of inference in the machine learning lifecycle, focusing on the challenges of deploying models at scale. It covers operational factors such as latency, throughput, and hardware selection, which are especially critical in real-time applications like self-driving cars. The discussion advocates a holistic approach to model development that accounts for inference requirements from the outset to ensure successful deployment.
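
As a rough illustration of the latency and throughput factors mentioned above, here is a minimal Python sketch of an inference benchmark. The `predict` function and its timing are hypothetical placeholders, not from the episode; in a real deployment it would wrap your model's actual forward pass.

```python
import time
import statistics

# Hypothetical stand-in for a deployed model's forward pass;
# in practice this would call your inference runtime.
def predict(batch):
    time.sleep(0.002)  # simulate ~2 ms of compute per batch (assumed)
    return [0] * len(batch)

def benchmark(batch_size=8, iterations=200):
    latencies = []
    for _ in range(iterations):
        batch = [None] * batch_size
        start = time.perf_counter()
        predict(batch)
        latencies.append(time.perf_counter() - start)

    latencies.sort()
    p50 = statistics.median(latencies)
    p99 = latencies[int(0.99 * len(latencies)) - 1]
    throughput = batch_size * iterations / sum(latencies)

    print(f"p50 latency: {p50 * 1e3:.2f} ms")
    print(f"p99 latency: {p99 * 1e3:.2f} ms")
    print(f"throughput:  {throughput:.0f} samples/sec")

if __name__ == "__main__":
    benchmark()
```

Tail latencies such as p99 often matter more than the average for real-time systems, since a self-driving car has to meet its deadline on every frame, not just on most of them.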
