In this episode of Gradient Dissent, Lukas Biewald talks with Tuhin Srivastava, CEO and founder of Baseten, one of the fastest-growing companies in the AI inference ecosystem. Tuhin shares the real story behind Baseten’s rise and how the market finally aligned with the infrastructure they’d spent years building.
They get into the core challenges of modern inference, including why dedicated deployments matter, how runtime and infrastructure bottlenecks stack up, and what makes serving large models fundamentally different from smaller ones.
Tuhin also explains how vLLM, TensorRT-LLM, and SGLang differ in practice, what it takes to tune workloads for new chips like the B200, and why reliability becomes harder as systems scale.
The conversation dives into company-building, from killing product lines to avoiding premature scaling while navigating a market that shifts every few weeks.
Connect with us here:
Tuhin Srivastava: https://www.linkedin.com/in/tuhin-srivastava/
Lukas Biewald: https://www.linkedin.com/in/lbiewald/
Weights & Biases: https://www.linkedin.com/company/wandb/