
Daniel Situnayake: AI on the Edge

The Gradient: Perspectives on AI


The Future of Machine Learning

There's work that's been done around model compression. Quantization is just this idea that typically, when you're training a model, the weights and other parameters are expressed as 32-bit floating point values. But it turns out that, because deep learning is awesome, you can actually reduce the precision of these numbers and often not really see much deterioration in the performance of your model. So obviously the size is getting shrunk down by a factor of four.
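To make the factor-of-four arithmetic concrete, here is a minimal sketch (not from the episode) of symmetric 8-bit post-training quantization in NumPy; the weight tensor and its shape are made up for illustration:

```python
import numpy as np

# Hypothetical float32 weight tensor standing in for a trained layer's parameters.
weights_fp32 = np.random.randn(256, 128).astype(np.float32)

# Symmetric linear quantization to 8-bit integers:
# map the float range [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

# At inference time the integers are rescaled back to approximate floats.
weights_dequant = weights_int8.astype(np.float32) * scale

print("fp32 size:", weights_fp32.nbytes, "bytes")   # 32 bits per weight
print("int8 size:", weights_int8.nbytes, "bytes")   # 8 bits per weight -> 4x smaller
print("max abs error:", np.abs(weights_fp32 - weights_dequant).max())
```

Going from 32-bit floats to 8-bit integers is where the four-times size reduction mentioned above comes from; the small rounding error introduced per weight is what typically leaves model accuracy largely intact.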

