
Semi Doped
Nvidia "Acquires" Groq
Jan 5, 2026

The discussion dives into Nvidia's unconventional acquisition of Groq and the confusion it sparked. The hosts challenge common expectations around GPUs and HBM as they explore Groq's ultra-low-latency architecture and the trade-offs between SRAM and HBM. They highlight distinctive use cases for LPUs, from ad personalization to real-time translation in robotics. Insights into Nvidia's strategy reveal an expanding focus on workload-specific optimization, affirming that while GPUs aren't obsolete, LPUs serve a distinct purpose in inference.
Not A Typical Acquisition
- Nvidia's deal for Groq was a non-traditional acquisition focused on licensing IP and moving key employees.
- The structure sparked debate about startup payouts and employee outcomes in acqui-hire deals.
SRAM Avoids HBM Bottlenecks
- Groq's LPUs use on-chip SRAM instead of HBM, avoiding HBM supply and packaging bottlenecks.
- SRAM's capacity limits make LPUs impractical for very large models that need terabytes of memory (see the back-of-the-envelope sketch below).
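
A rough sketch of the capacity math behind that point, assuming the ~230 MB of on-chip SRAM Groq has publicly cited for its first-generation LPU; the model sizes and the 1-byte-per-parameter quantization are illustrative, not from the episode:

```python
import math

# ~230 MB of on-chip SRAM per chip: Groq's published figure for its
# first-generation LPU. Treated here as an assumption for the estimate.
SRAM_PER_CHIP_GB = 0.23

def chips_for_weights(params_billions: float, bytes_per_param: float) -> int:
    """Minimum chips needed just to hold the model weights.

    Ignores KV cache, activations, and any replication, so this is a
    strict lower bound on the number of SRAM-only chips required.
    """
    weights_gb = params_billions * bytes_per_param
    return math.ceil(weights_gb / SRAM_PER_CHIP_GB)

for params in (8, 70, 405):
    print(f"{params}B params @ 1 byte/param -> >= {chips_for_weights(params, 1)} chips")
```

Even at aggressive 8-bit quantization, a 70B-parameter model needs roughly 300 chips just for weights, while a single HBM-equipped GPU holds 100+ GB on one package; that capacity gap is what makes terabyte-scale models impractical on SRAM alone.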
Compiler-Driven Determinism
- Groq's LPU is VLIW and compiler-scheduled, moving runtime complexity into software.
- That design yields ultra-low latency and determinism, but it makes compilation and programming much harder (a toy scheduling sketch follows below).
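
To make "runtime complexity moves into software" concrete, here is a toy, hypothetical sketch of VLIW-style static scheduling (not Groq's actual toolchain): the compiler fixes the cycle and functional unit for every op ahead of time, so the hardware never reorders anything and the program's latency is known before it runs:

```python
from dataclasses import dataclass, field

@dataclass
class Op:
    name: str
    unit: str                            # functional unit that executes this op
    deps: list = field(default_factory=list)

def schedule(ops):
    """Statically assign each op a start cycle (ops in topological order).

    Each op is assumed to take one cycle; an op starts once its inputs are
    done and its functional unit is free. The schedule is fixed at "compile
    time" -- nothing is decided or reordered at runtime.
    """
    start, unit_free = {}, {}
    for op in ops:
        ready = max((start[d] + 1 for d in op.deps), default=0)
        cycle = max(ready, unit_free.get(op.unit, 0))
        start[op.name] = cycle
        unit_free[op.unit] = cycle + 1   # unit is busy for this cycle
    return start

program = [
    Op("load_a", "mem"),
    Op("load_b", "mem"),
    Op("mul",    "alu", deps=["load_a", "load_b"]),
    Op("add",    "alu", deps=["mul"]),
    Op("store",  "mem", deps=["add"]),
]

for name, cycle in schedule(program).items():
    print(f"cycle {cycle}: {name}")
```

Because the schedule is decided once at compile time, every run takes exactly the same cycles (the determinism the hosts describe); the flip side is that the compiler, not the hardware, must resolve every dependency and resource hazard, which is what makes the toolchain hard to build.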
