
LLM Distillation and Compression // Guanhua Wang // #278

MLOps.community


Optimizing Language Model Training with DeepSpeed

This chapter explores the functionality of DeepSpeed, a PyTorch-based library, focusing on its memory-efficient training methods built around the ZeRO optimizer. It discusses advanced techniques such as CPU/NVMe offloading and ZeRO-Infinity, which make it possible to train large language models on limited hardware. The chapter also covers the phases of model training, the trade-offs introduced by quantization, and the limitations of model distillation outside its typical use cases.
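To make the ZeRO and offloading ideas concrete, here is a minimal sketch (not taken from the episode) of a DeepSpeed configuration that enables ZeRO stage 3 with optimizer and parameter offloading to CPU; the model, batch size, and learning rate are illustrative placeholders, and a real run would also need a distributed launcher such as the deepspeed CLI.

```python
# Minimal sketch: ZeRO stage 3 with CPU offloading in DeepSpeed.
# Values below are illustrative, not recommendations from the episode.
import torch
import deepspeed

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "bf16": {"enabled": True},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 3,                              # partition optimizer state, gradients, and parameters
        "offload_optimizer": {"device": "cpu"},  # move optimizer state to CPU memory
        "offload_param": {"device": "cpu"},      # ZeRO-Infinity can also target "nvme"
    },
}

# Stand-in for a large language model.
model = torch.nn.Linear(1024, 1024)

# deepspeed.initialize wraps the model in an engine that handles the
# partitioning, offloading, and backward/step bookkeeping described above.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

Raising the ZeRO stage trades extra communication for lower per-GPU memory, and pushing state to CPU or NVMe extends that trade-off further, which is the core idea behind training large models on limited hardware discussed in this chapter.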
