The Mathematics of Training LLMs — with Quentin Anthony of Eleuther AI

Latent Space: The AI Engineer Podcast

Understanding Floating-Point Operations in Large Language Models

This chapter explores the distinction between theoretical and actually achieved floating-point operations per second (FLOPS) when training large language models, covering hardware utilization and inefficiencies, particularly differences between AMD and NVIDIA GPUs. It also discusses the evolution of computing resources, numerical precision techniques, and the effects of model quantization on performance and memory during training.
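To make the theoretical-vs-achieved distinction concrete, a common rule of thumb is that training costs roughly 6 FLOPs per parameter per token; comparing that estimate against what the hardware actually delivers gives a model-FLOPs-utilization (MFU) figure. The minimal Python sketch below illustrates this calculation. It is not from the episode: the function names, the 7B-parameter / 2T-token model, and the peak and measured throughput numbers are illustrative assumptions only.

```python
# Minimal sketch: theoretical training compute vs. achieved hardware throughput.
# Assumes the standard ~6 FLOPs per parameter per token approximation for
# forward + backward passes; all concrete numbers below are illustrative.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def model_flops_utilization(achieved_flops_per_s: float, peak_flops_per_s: float) -> float:
    """Fraction of the hardware's theoretical peak FLOPS actually achieved."""
    return achieved_flops_per_s / peak_flops_per_s

if __name__ == "__main__":
    # Illustrative model: 7B parameters trained on 2T tokens.
    total = training_flops(7e9, 2e12)

    # Illustrative cluster: one step processes 4M tokens in 4 s, against an
    # assumed theoretical peak of 1e17 FLOP/s across all GPUs.
    flops_per_step = training_flops(7e9, 4e6)
    achieved = flops_per_step / 4.0
    mfu = model_flops_utilization(achieved, 1e17)

    print(f"Total training compute: {total:.2e} FLOPs")
    print(f"Achieved throughput:    {achieved:.2e} FLOP/s")
    print(f"MFU: {mfu:.1%}")
```

With these assumed numbers the script reports an MFU of about 42%, which is the kind of gap between theoretical peak and realized throughput the chapter is describing.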
