The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Dataflow Computing for AI Inference with Kunle Olukotun - #751



Optimizing for Tokens-per-Second and Latency

Kunle discusses tokens-per-second and the latency-throughput trade-off, showing how tensor parallelism and overlapping computation with communication reduce latency even at high throughput.
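A back-of-envelope sketch of the trade-off described above. This is an illustrative model, not anything from the episode: all numbers and the function names are hypothetical, and it simplifies decode latency to one compute term plus one communication term per token.

```python
# Illustrative model (hypothetical numbers): how tensor parallelism and
# compute/communication overlap affect per-token decode latency.

def per_token_latency_ms(compute_ms, comm_ms, tp_degree, overlap=False):
    """Per-token latency under tensor parallelism.

    compute_ms: single-device compute time per token
    comm_ms:    all-reduce cost introduced by sharding the layer
    tp_degree:  number of devices the layer is sharded across
    overlap:    whether communication is overlapped with compute
    """
    compute = compute_ms / tp_degree      # compute shrinks with sharding
    if tp_degree == 1:
        return compute                    # no cross-device communication
    if overlap:
        return max(compute, comm_ms)      # overlapped: the smaller cost hides
    return compute + comm_ms              # serialized: costs add up

def tokens_per_sec(latency_ms, batch_size):
    """Aggregate throughput when batch_size sequences decode in lock-step."""
    return 1000.0 / latency_ms * batch_size

baseline = per_token_latency_ms(8.0, 1.0, tp_degree=1)            # 8.0 ms
tp_only = per_token_latency_ms(8.0, 1.0, tp_degree=4)             # 3.0 ms
tp_overlap = per_token_latency_ms(8.0, 1.0, tp_degree=4,
                                  overlap=True)                   # 2.0 ms
```

With these made-up numbers, sharding across 4 devices cuts compute time 4x but adds a communication term; overlapping hides that term, so latency drops further without sacrificing the batch size (and hence throughput) the system can sustain.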

