

Data Center Metrics: Gigawatts and Manhattans
Aug 12, 2025
AI is driving an unprecedented demand for data centers, and the Metrics Brothers break down essential metrics for understanding this need. They explore FLOPS, power consumption, and how the first multi-GW data center, Prometheus, is setting new standards. Comparisons to Manhattan reveal the immense scale of these facilities. Discussions on energy requirements highlight the staggering projections for future demands and operational efficiency. Tune in as they navigate the implications of these trends on AI infrastructure!
Measure What Actually Matters
- New AI-era metrics are emerging and we must decide which actually matter versus what is easy to count.
- Ben Evans' point: measure what affects outcomes, not just what is measurable.
FLOPS Are The New Compute Currency
- FLOPS replace MIPS as the central compute metric for AI because models are math-heavy.
- Training a foundation model requires astronomically more FLOPS than a single inference.
Training vs. Inference: A ~10-Billion× Gap
- Training GPT-3 required ~3.14×10^23 floating-point operations, while a typical single inference uses ~3.5×10^13.
- Building the model was therefore roughly 10 billion times more compute-intensive than running one inference.
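The ratio in the bullets above can be sanity-checked with a few lines of Python, using the approximate figures quoted in the episode:

```python
# Order-of-magnitude comparison of training vs. inference compute for GPT-3,
# using the approximate figures cited in the episode.
training_flops = 3.14e23   # total floating-point operations to train GPT-3 (as cited)
inference_flops = 3.5e13   # floating-point operations for one typical inference (as cited)

ratio = training_flops / inference_flops
print(f"Training / inference ratio: {ratio:.2e}")  # ~8.97e9, i.e. roughly 10 billion
```

The exact quotient is about 9×10^9, which rounds to the "10 billion times" figure used in the episode.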