
Sven Mika

TalkRL: The Reinforcement Learning Podcast


Scaling Beyond a Single Machine With RLlib

We've seen users use a hundred and more workers. We've run experiments with 250 at one point. So these run on really large clusters. You would like one head node that has a couple of GPUs, and then you have dozens of smaller, cheaper machines, so that these environment workers can run on those. The other axis that comes in here for scaling is the hyperparameter tuning axis, though. This could be like a single job, right? You can think that this becomes an even larger job. No doubt that you'll get that and more done, but to be fair to
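For context, here is a minimal sketch of the two scaling axes described in the clip, expressed with Ray RLlib and Ray Tune. It is not from the episode: the environment ("CartPole-v1"), the worker count, and the learning-rate grid are placeholder values, and the exact config method names (e.g. num_rollout_workers vs. num_env_runners) differ between Ray versions.

```python
import ray
from ray import tune
from ray.rllib.algorithms.ppo import PPOConfig

ray.init()  # on a multi-node cluster: ray.init(address="auto")

# Axis 1: distributed sampling. Rollout (environment) workers are cheap CPU
# processes spread across the cluster; the learner sits on the GPU head node.
config = (
    PPOConfig()
    .environment("CartPole-v1")                  # placeholder environment
    .rollouts(num_rollout_workers=100)           # env-sampling workers across the cluster
    .resources(num_gpus=1)                       # learner GPU(s) on the head node
    # Axis 2: hyperparameter tuning. Each sampled value turns the single
    # distributed training job into several distributed jobs run by Ray Tune.
    .training(lr=tune.grid_search([1e-4, 5e-4, 1e-3]))
)

tune.run(
    "PPO",
    config=config.to_dict(),
    stop={"training_iteration": 100},
)
```

In this setup each Tune trial is itself a distributed RLlib job, which is why hyperparameter tuning multiplies the cluster footprint rather than just the number of processes on one machine.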

