
Andrew Feldman: Cerebras and AI Hardware

The Gradient: Perspectives on AI


Using Distributed Computing to Train Large Models

When you get to about a billion parameters, everybody is running data parallel, and then, bang, you run into model parallel. And that's why just getting one of these networks to train is a publication. I mean, how silly is that, if you step back? This ought to be as easy as selling chicken dinners. If you have $6 or $8 million for infrastructure, it can be. It ought not to take months, right?
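To make the distinction Feldman is drawing concrete, here is a minimal, framework-free Python sketch contrasting the two strategies. The toy two-layer model, the shapes, and the simulated "devices" are all illustrative assumptions, not Cerebras code or anything from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))        # a batch of 8 examples (toy sizes)
W1 = rng.standard_normal((4, 16))      # layer 1 weights
W2 = rng.standard_normal((16, 2))      # layer 2 weights

def forward(x, w1, w2):
    # A two-layer ReLU MLP standing in for a large network.
    return np.maximum(x @ w1, 0) @ w2

# Data parallel: every "device" holds a full copy of the model and the
# *batch* is split. In a real system the gradients would be all-reduced;
# here we just run the shards independently, since the replicas agree.
shards = np.split(X, 2)                # simulate 2 devices
data_parallel_out = np.concatenate([forward(s, W1, W2) for s in shards])

# Model parallel: the model no longer fits on one device, so the *layers*
# are split across devices and activations must flow between them. This
# partitioning is what becomes hard past roughly a billion parameters.
hidden = np.maximum(X @ W1, 0)         # "device 0" computes layer 1
model_parallel_out = hidden @ W2       # "device 1" computes layer 2

# Both strategies compute the same function; they differ in what is split.
assert np.allclose(data_parallel_out, model_parallel_out)
```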
