
Sid Sheth
Founder and CEO of d-Matrix, an AI inference chip company challenging GPU incumbents. Episodes cover the company's Corsair chip and its inference-focused strategy.
Top 3 podcasts with Sid Sheth
Ranked by the Snipd community

24 snips
Apr 30, 2025 • 55min
#251 Sid Sheth: How d-Matrix is Disrupting AI Inference in 2025
In a captivating discussion, Sid Sheth, CEO and Co-Founder of d-Matrix, dives into how his startup is transforming AI inference. He highlights the significance of inference over training for the future of AI and how d-Matrix’s Corsair PCIe accelerator outshines NVIDIA's offerings. Sid explains the role of in-memory compute technologies, the shift towards heterogeneous AI infrastructure, and the global landscape of inference chips. With insights from his extensive semiconductor background, he reveals his vision for creating a formidable competitor to industry giants.

21 snips
Nov 14, 2025 • 51min
Former Twitter CEO Building AI Web Infrastructure, Waymo’s Freeway Expansion | Nov 14, 2025
Parag Agrawal, former CEO of Twitter and now founder of Parallel Web Systems, shares insights on building AI web infrastructure and the future evolution of AI agents. Sid Sheth, CEO of d-Matrix, discusses their innovative inference chip challenging the GPU market, while Cory Weinberg reveals the boardroom turmoil at Grindr amid a takeover offer. The conversation also touches on Waymo's freeway expansion and its implications for autonomous vehicles, emphasizing the shift in public acceptance and safety perceptions.

8 snips
Aug 6, 2025 • 24min
Confronting AI’s Next Big Challenge: Inference Compute
In a dynamic conversation, Sid Sheth, Founder and CEO of d-Matrix, dives into the complexities of AI inference. He emphasizes that inference isn't a one-size-fits-all challenge and requires specialized hardware for different needs. Sid introduces d-Matrix's innovative modular platform, Corsair, designed to minimize memory-compute distance for faster performance. He also explores the parallels between human learning and AI deployment, and stresses the necessity for tailored infrastructure to enhance enterprise AI integration.
