Gradient Dissent: Conversations on AI

Exploring PyTorch and Open-Source Communities with Soumith Chintala, VP/Fellow at Meta, Co-Creator of PyTorch



The Importance of Performance Tuning in PyTorch

There must be some mathematical theorem that says you can't calculate the gradient of arbitrary code. PyTorch has a fairly big API surface, about 1,000 API functions, and each of those composes with whether you have a quantized tensor or a regular tensor, with distributed execution, and with various other features. So making a back-end work smoothly for PyTorch takes a lot of work. The bulk of the work we do on new platforms, such as AMD or Mac GPUs or other kinds of accelerators, is reaching functionality parity.
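To illustrate the autograd point above, here is a minimal sketch (my example, not from the episode): PyTorch differentiates ordinary Python code, including loops and branches, by recording the tensor operations that actually execute, and the same code runs unchanged on any back-end that has reached functionality parity.

```python
import torch

# Pick whichever back-end is available; the code below is identical either way.
device = "cuda" if torch.cuda.is_available() else "cpu"

def arbitrary_code(x):
    # Control flow is fine: autograd traces whatever path actually runs.
    y = x * 2
    for _ in range(3):
        y = y * x if y.sum() > 0 else y - x
    return y.sum()

x = torch.tensor([1.0, 2.0], requires_grad=True, device=device)
out = arbitrary_code(x)
out.backward()  # gradients flow back through the loop and the branch
print(x.grad)
```

For these inputs every branch takes the `y * x` path, so the function reduces to the sum of 2x^4 and the gradient is 8x^3 per element.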

