
Large models on CPUs (Practical AI #221)

Changelog Master Feed


The Importance of Intuition in Quantization

There's one or two layers in some of these models that are extremely sensitive to quantization, for whatever reason, so you can't quantize them. Removing those from quantization, you get 100% recovery. Pruning is much more of a requirement to do training-aware on the pruning side. But you definitely will see that the choices that are made, the hyperparameters that are chosen, can significantly affect the recovery and the quality. If people were seeing drops in performance, it's primarily because of those choices and those issues, and just the wide breadth of options that's available right now and not knowing how to narrow it down.
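The idea of skipping a few quantization-sensitive layers can be sketched in code. Below is a minimal illustration (not the speakers' actual tooling, and the layer-selection policy is an assumption): it measures how much each Linear layer's weights degrade under a naive symmetric int8 round-trip, flags the most sensitive layer to stay in fp32, and quantizes the rest with PyTorch dynamic quantization.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import quantize_dynamic, default_dynamic_qconfig


def fake_quant_error(weight: torch.Tensor, num_bits: int = 8) -> float:
    """MSE between a weight tensor and its symmetric int8 round-trip."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = weight.abs().max() / qmax
    if scale == 0:
        return 0.0
    q = torch.clamp(torch.round(weight / scale), -qmax - 1, qmax) * scale
    return torch.mean((weight - q) ** 2).item()


# Toy model standing in for a larger network (assumption for illustration).
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# Rank Linear layers by quantization error.
errors = {
    name: fake_quant_error(m.weight.data)
    for name, m in model.named_modules()
    if isinstance(m, nn.Linear)
}

# Illustrative policy: keep the single most sensitive layer in fp32.
skip = {max(errors, key=errors.get)} if errors else set()

# Quantize only the layers that were not flagged as sensitive.
qconfig_spec = {
    name: default_dynamic_qconfig for name in errors if name not in skip
}
quantized = quantize_dynamic(model, qconfig_spec, dtype=torch.qint8)
print("kept in fp32:", sorted(skip))
```

In practice the sensitivity signal would come from task accuracy rather than weight-level MSE, but the structure is the same: identify the outlier layers, exclude them, and quantize everything else.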

