The Everything Feed - All Packet Pushers Pods

HN793: A Deep Dive Into High-Performance Switch Memory

Aug 22, 2025
LJ Wobker, a Principal Engineer at Cisco specializing in high-performance memory systems, explores the intricacies of high-performance switch memory. He compares TCAM, SRAM, and DRAM and the trade-offs each makes for networking functions. The conversation covers memory management in switches and the complex challenges of packet processing. LJ also explains how TCAM enables fast data retrieval, the significance of ASIC interfaces, and the evolving demands of network performance as memory bandwidth requirements rise.
INSIGHT

Packet Forwarding Is A Chain Of Lookups

  • A switch parses incoming frames, builds lookup keys from headers and metadata, and applies a chain of lookups to decide forwarding and transformations.
  • Each added feature increases parsing work and stored state, multiplying memory and lookup requirements.
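The lookup chain described above can be sketched in a few lines. This is a hypothetical model, not Cisco's implementation: `parse`, `ACL_TABLE`, and `MAC_TABLE` are illustrative names, and real silicon performs these lookups in dedicated memory (TCAM/SRAM) rather than dictionaries.

```python
# Minimal sketch of a switch forwarding pipeline: parse headers, build a
# lookup key, then apply a chain of table lookups, each of which can
# decide forwarding or drop the frame. All names are illustrative.

def parse(frame):
    """Extract header fields into metadata (greatly simplified)."""
    return {"dst_mac": frame["dst_mac"], "vlan": frame.get("vlan", 1)}

# Each enabled feature adds another table the key is looked up in.
ACL_TABLE = {("de:ad:be:ef:00:01", 10): {"permit": True}}
MAC_TABLE = {("de:ad:be:ef:00:01", 10): {"out_port": 3}}

def forward(frame):
    meta = parse(frame)
    key = (meta["dst_mac"], meta["vlan"])        # key built from parsed fields
    acl = ACL_TABLE.get(key, {"permit": True})   # lookup 1: ACL
    if not acl["permit"]:
        return "drop"
    entry = MAC_TABLE.get(key)                   # lookup 2: L2 forwarding
    return f"port {entry['out_port']}" if entry else "flood"

print(forward({"dst_mac": "de:ad:be:ef:00:01", "vlan": 10}))  # -> port 3
print(forward({"dst_mac": "00:00:00:00:00:02", "vlan": 10}))  # -> flood
```

Each additional feature would slot in as another table between the parse step and the final forwarding decision, which is exactly why state and lookup bandwidth grow with the feature set.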
INSIGHT

Parsing Depth Is A Hardware Trade-Off

  • Parsers set a pointer into the packet and branch on ethertype; parser size is physically limited on high-speed silicon.
  • Hardware chooses how deep to parse; deeper parsing costs logic, area, or throughput.
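The pointer-and-branch behavior above can be modeled as a small state machine. This is a sketch under stated assumptions: `MAX_PARSE_DEPTH` is a hypothetical stand-in for the silicon's parse-stage limit, and only VLAN (0x8100) and IPv4 (0x0800) ethertypes are handled.

```python
# Sketch of a header parser that keeps a pointer (offset) into the
# packet and branches on ethertype. Hardware parsers bound how many
# branch steps they can take; MAX_PARSE_DEPTH models that limit.

MAX_PARSE_DEPTH = 4   # hypothetical limit on parse stages

ETH_VLAN = 0x8100     # 802.1Q tag
ETH_IPV4 = 0x0800

def parse_headers(pkt: bytes):
    offset, depth, layers = 12, 0, ["eth"]   # pointer starts past dst/src MAC
    while depth < MAX_PARSE_DEPTH:
        ethertype = int.from_bytes(pkt[offset:offset + 2], "big")
        offset += 2
        depth += 1
        if ethertype == ETH_VLAN:            # VLAN tag: skip TCI, branch again
            layers.append("vlan")
            offset += 2
        elif ethertype == ETH_IPV4:
            layers.append("ipv4")
            break
        else:                                # unknown: stop parsing here
            layers.append(hex(ethertype))
            break
    return layers, offset

# Double-tagged frame: 12 MAC bytes + outer tag + inner tag + IPv4 ethertype
pkt = bytes(12) + b"\x81\x00\x00\x0a" + b"\x81\x00\x00\x14" + b"\x08\x00"
print(parse_headers(pkt))  # -> (['eth', 'vlan', 'vlan', 'ipv4'], 22)
```

Each `elif` arm here corresponds to parser logic that costs gates and area on the chip, which is why "how deep to parse" is a hardware design decision rather than a software one.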
INSIGHT

Every Feature Widens The Lookup Key

  • Multiple features (ACLs, QoS, mirroring) add metadata and widen the lookup key, often to hundreds of bits.
  • Bigger keys mean wider buses, more wires through the pipeline, and significant area/power cost.
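The key-widening effect can be made concrete with a back-of-the-envelope model. The field names and bit widths below are illustrative assumptions, not a real ASIC's key layout.

```python
# Sketch of how each enabled feature widens the lookup key: fields are
# concatenated into one wide bit vector whose width the pipeline buses
# must carry. Field names and widths are illustrative only.

BASE_FIELDS = [("dst_mac", 48), ("vlan", 12)]
FEATURE_FIELDS = {
    "acl":    [("src_ip", 32), ("dst_ip", 32), ("l4_ports", 32)],
    "qos":    [("dscp", 6), ("tc", 3)],
    "mirror": [("session_id", 8)],
}

def key_width(features):
    """Total key width in bits for a given set of enabled features."""
    fields = BASE_FIELDS + [f for feat in features for f in FEATURE_FIELDS[feat]]
    return sum(width for _, width in fields)

print(key_width([]))                        # -> 60
print(key_width(["acl"]))                   # -> 156
print(key_width(["acl", "qos", "mirror"]))  # -> 173
```

Even with these toy widths, three features push the key well past 150 bits; every one of those bits is a wire that must be routed through each pipeline stage, which is where the area and power cost comes from.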