
Speculative Decoding and Efficient LLM Inference with Chris Lott - #717

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

00:00

Intro

This chapter explores the challenges of optimizing computation for large language models, focusing on the encoding (prefill) and decoding stages of query processing. It also examines the limits imposed by compute and memory bandwidth, drawing on the speaker's engineering and AI experience at Qualcomm.
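The compute-versus-bandwidth distinction the chapter raises can be illustrated with a back-of-envelope arithmetic-intensity estimate. This is a minimal sketch with assumed numbers (not taken from the episode): it compares how many FLOPs a dense transformer performs per byte of weights read during prefill (many tokens at once) versus single-token decode.

```python
def arithmetic_intensity(tokens: int, params: float) -> float:
    """FLOPs per byte of weights read for one forward pass over `tokens`.

    Rough model: each token costs ~2 * params FLOPs (one multiply-add
    per parameter), and the weights (~2 bytes/param in fp16) must be
    read once per forward pass regardless of how many tokens share it.
    """
    flops = 2 * params * tokens
    bytes_read = 2 * params
    return flops / bytes_read

PARAMS = 7e9  # e.g. a 7B-parameter model (assumed for illustration)

# Prefill processes the whole prompt in one pass: high intensity,
# so it tends to be compute-bound.
prefill = arithmetic_intensity(tokens=1024, params=PARAMS)

# Decode generates one token per pass: the full weight matrix is
# streamed from memory for a single token, so it is bandwidth-bound.
decode = arithmetic_intensity(tokens=1, params=PARAMS)

print(prefill)  # 1024.0 FLOPs per byte
print(decode)   # 1.0 FLOPs per byte
```

Under this rough model, intensity scales directly with the number of tokens processed per weight read, which is one way to see why techniques like speculative decoding (amortizing each weight read over several candidate tokens) can speed up inference.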
