Rohin Shah

TalkRL: The Reinforcement Learning Podcast

CHAPTER

Scaling Up Deep Learning

When we're dealing with a high-dimensional state, there's just a ridiculous number of permutations and situations. I think that, basically, with this particular approach, you mostly just shouldn't try to scale up in this way. It's more meant to be like a first, quick sanity check that is already quite hard for current systems to pass. We're talking scores like 70%. Once you get to like 90, 99%, then it's like, oh, that's the point at which you'd start thinking about scaling up.

