2min chapter

Parallelism and Acceleration for Large Language Models with Bryan Catanzaro - #507

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

CHAPTER

The Impacts of Pipeline Parallelism on Batching Performance

In addition to the impacts you mention at the infrastructure level, do you also see impacts at the model level, meaning changes in the way the model converges on performance and all the things we associate with tweaking batching parameters? Right, so it depends. What we typically do is choose a pipeline schedule that still obeys the semantics of the model, and that's how we've gotten our best results: our best results with pipeline parallelism have been achieved using something that is semantically equivalent to not using pipeline parallelism. However, you're a hundred percent correct that it does have an impact on the batch size.
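The semantic-equivalence point above can be sketched in a few lines. This is a toy illustration, not NVIDIA's Megatron-LM implementation: the `stage1`/`stage2` functions and the fill-and-drain loop are hypothetical stand-ins for model partitions and the real pipeline schedule. A schedule that splits the batch into microbatches and feeds them through the stages in order produces exactly the same outputs as running the whole batch unpipelined, which is what "obeys the semantics of the model" means here.

```python
# Toy model split into two pipeline "stages" (hypothetical layers,
# chosen only to make the equivalence checkable).
def stage1(x):
    return [v * 2.0 for v in x]

def stage2(x):
    return [v + 1.0 for v in x]

def run_without_pipeline(batch):
    # Baseline: the whole batch flows through every layer at once.
    return stage2(stage1(batch))

def run_with_pipeline(batch, num_microbatches=4):
    # Pipeline-style execution: split the batch into microbatches and
    # push each one through the stages in turn (a simple fill-and-drain
    # schedule). The concatenated outputs match the unpipelined run.
    size = len(batch) // num_microbatches
    out = []
    for i in range(num_microbatches):
        mb = batch[i * size:(i + 1) * size]
        out.extend(stage2(stage1(mb)))
    return out

batch = [float(i) for i in range(8)]
assert run_with_pipeline(batch) == run_without_pipeline(batch)
```

The batching impact the speakers mention follows from the same structure: to keep all stages busy, the global batch must be large enough to split into several microbatches, so pipeline depth effectively constrains the batch-size choices available during training.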
