The Importance of Quantization in Language Models
The language model, because there's a more abstract form of what you think about as language tokens, language, the symbols, we can actually compress much more aggressively, down to four bits. But still, with that said, why is it so different? Think about these things as all being vectors, you know, representations at some point. Even though we're at a very early stage, we believe we have the potential to maybe reduce even further, beyond four bits. We're going to bring a lot of new announcements to the industry. Eventually, we can bring that value to the industry and also make sure we're on the frontier of bringing all these large language models to the device.
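To make "compress down to four bits" concrete, here is a minimal sketch of weight quantization in Python with NumPy, assuming a symmetric per-tensor scheme; the speaker doesn't describe a specific method, and the function names here are illustrative.

```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    """Map float weights to 4-bit signed integers in [-8, 7].

    Symmetric per-tensor scheme: one float scale for the whole tensor.
    """
    # Largest magnitude maps to 7; tiny floor guards against an all-zero tensor.
    scale = max(float(np.max(np.abs(weights))), 1e-12) / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the 4-bit codes."""
    return q.astype(np.float32) * scale

# Quantize a small random weight matrix and check the reconstruction error.
w = np.random.randn(4, 8).astype(np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize_int4(q, scale)
print("max abs error:", np.max(np.abs(w - w_hat)))
```

A real on-device deployment would pack two 4-bit codes per byte and typically use per-channel or per-group scales rather than one per tensor, but the arithmetic is the same idea.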