
Tech Disruptors: AWS Infrastructure in the Era of AI Workloads
Oct 27, 2025. Prasad Kalyanaraman, Vice President of AWS Infrastructure Services, discusses the evolving world of AI workloads: the vanishing line between AI and traditional workloads, and the need for fungible clusters that optimize GPU and power use. He describes AWS's approach of retrofitting existing data centers for AI rather than building from scratch, and addresses supply-chain constraints and the role of liquid cooling for high-power chips in the future of cloud computing.
AI Snips
Fundamentals Still Rule AI Data Centers
- Data-center fundamentals (power, cooling, networking, security) stay the same even for AI workloads.
- AI adds nuances like denser accelerators and higher per-server power that require targeted adaptations.
Shrink The Blast Radius Of Power Systems
- Reduce blast radius by decentralizing critical systems like UPS and placing battery backups closer to servers.
- This increases resilience and minimizes costly GPU idle time during power transitions.
AI Is Defined By Need, Not Just Chips
- 'AI workload' is not defined by chip type alone; power, cooling, and network patterns matter more.
- Training needs non-blocking, ultra-cluster-scale networks, while inference resembles traditional workloads.
