

The AI Bandwidth Wall & Co-Packaged Optics
Aug 10, 2025
The podcast dives into the impressive advancements in the semiconductor industry, highlighting the massive leap in GPU performance over the decades. It examines the critical bottlenecks that hinder full utilization of this power, particularly the emerging issues with IO bandwidth. Co-packaged optics is introduced as a promising solution, poised to revolutionize data transfer efficiency in server setups. The discussion also touches on how these developments impact AI applications and the future of semiconductor technology.
Compute Outpaced I/O
- Compute performance has vastly outpaced I/O bandwidth growth over the past two decades.
- A 2025 NVIDIA B200 GPU delivers roughly 178,000 times the floating-point throughput of a top Intel CPU from the late 1990s.
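The 178,000× figure implies a striking compound growth rate. A quick back-of-the-envelope check (the 1998–2025 span is an assumption for illustration, not from the episode):

```python
# Implied compound annual growth from the podcast's 178,000x figure.
# Assumes ~27 years between a late-1990s CPU (1998) and the B200 (2025);
# the exact year span is an illustrative assumption.
ratio = 178_000
years = 2025 - 1998  # 27 years
annual_growth = ratio ** (1 / years)
print(f"~{(annual_growth - 1) * 100:.0f}% compute growth per year")
```

That works out to roughly 56% per year, which is the backdrop against which I/O bandwidth growth has lagged.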
Pluggable Transceivers Do The Conversion
- Pluggable optical transceivers translate between the switch's electrical signals and fiber optics at the rack faceplate.
- Each contains an optical engine (laser, modulator, photodiode) plus an electrical engine that interfaces with the switch silicon.
SerDes Is Eating Power
- Serializer/deserializer (SerDes) circuits convert the switch chip's parallel outputs into serial streams for the fiber link and require complex equalization.
- Asianometry says SerDes now accounts for a growing share of switch die area and data-center power draw.
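The core parallel-to-serial idea behind a SerDes can be sketched in a few lines. This is a minimal illustration only: a real SerDes also handles clock recovery, line coding (e.g. 64b/66b), and equalization, all of which are omitted here, and the function names are invented for this sketch.

```python
def serialize(words, width=8):
    """Flatten parallel words into one serial bitstream, MSB first."""
    bits = []
    for w in words:
        for i in range(width - 1, -1, -1):
            bits.append((w >> i) & 1)
    return bits

def deserialize(bits, width=8):
    """Regroup the serial bitstream back into parallel words."""
    words = []
    for i in range(0, len(bits), width):
        w = 0
        for b in bits[i:i + width]:
            w = (w << 1) | b
        words.append(w)
    return words

# Round-trip: two parallel bytes become a 16-bit serial stream and back.
data = [0xA5, 0x3C]
stream = serialize(data)
assert deserialize(stream) == data
```

The hard engineering is not this bit-shuffling but driving the resulting multi-gigabit serial stream over lossy channels, which is where the equalization complexity and power draw come from.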