
The Circuit EP 146: The State of AI Networking with Austin Lyons
Dec 22, 2025

Austin Lyons, a technology analyst and writer at ChipStrat, specializes in semiconductors and AI networking. He dives into the shift from traditional cloud networking to GPU-driven AI networks, emphasizing the need for new topologies for large AI models. The conversation contrasts copper and optical solutions and their trade-offs in performance and reliability. Austin also demystifies SerDes, explaining its crucial role in chip communication and system performance. Expect insights on the future of networking in a rapidly evolving tech landscape!
Networking Shift Driven By AI Workloads
- AI workloads have shifted networking from serving many users' internet traffic to running single, massive jobs spread across thousands of GPUs.
- This change forces new topologies and designs closer to HPC than to traditional cloud networking.
Farming Analogy For Scaling Types
- Austin likens scale-up to multiple combines harvesting one field together to finish it faster.
- He compares scale-out to separate combines harvesting different fields in parallel, spread across a distance.
Clear Definitions: Scale Up, Out, Across
- Scale up means many GPUs cooperating like one machine, with low-latency, high-bandwidth links and pooled memory.
- Scale out is horizontal clustering across racks, while scale across connects separate data centers over longer distances (see the sketch below).
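
One way to see the scale-up vs. scale-out split in practice is a minimal sketch, not from the episode, using JAX's device-mesh API: one mesh axis maps to the GPUs inside a node (the scale-up domain) and the other to nodes across the fabric (scale-out). The 8-GPUs-per-node figure, the axis names, and the multi-node cluster are assumptions chosen for illustration.

import numpy as np
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Assumed cluster shape: several nodes with 8 GPUs each (illustrative only).
gpus_per_node = 8
devices = np.array(jax.devices())
mesh_shape = (devices.size // gpus_per_node, gpus_per_node)

# "node" axis = scale-out (racks connected over the network fabric);
# "gpu" axis  = scale-up (GPUs inside one node over low-latency, high-bandwidth links).
mesh = Mesh(devices.reshape(mesh_shape), axis_names=("node", "gpu"))

# Shard a weight matrix so bandwidth-hungry traffic stays on the scale-up axis,
# while only coarser-grained traffic crosses nodes.
spec = PartitionSpec("node", "gpu")
weights = jax.device_put(np.ones((1024, 1024), dtype=np.float32),
                         NamedSharding(mesh, spec))
print(weights.sharding)

The point of the hierarchy is that collectives placed on the "gpu" axis ride the fast scale-up links, while traffic on the "node" axis crosses the scale-out network, which is the distinction the episode draws.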
