
Max Schwarzer

TalkRL: The Reinforcement Learning Podcast

CHAPTER

The Importance of Network Scaling in Model Free Agents

There are limits to how big you can make these networks. You can't just take a ResNet-50 or something, put it in BBF, and expect good performance. The real key for that was using the shorter update horizon, n equals 3 instead of n equals 10 or n equals 20. So there's a lot of room, even before we start talking about transformers, to push that horizon out to larger and larger networks.
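The "update horizon" here is the n in the n-step return target that value-based agents like BBF regress toward. A minimal sketch of such a target is below, assuming a generic DQN-style setup; the function and argument names (n_step_target, bootstrap_q) are illustrative, not BBF's actual code.

import numpy as np

def n_step_target(rewards, bootstrap_q, gamma=0.99, n=3):
    # rewards: the n rewards observed after the current state,
    #          r_t, r_{t+1}, ..., r_{t+n-1}
    # bootstrap_q: value estimate at the state n steps ahead,
    #              e.g. max_a Q(s_{t+n}, a) from a target network
    # n: the update horizon discussed above (3 versus 10 or 20)
    discounts = gamma ** np.arange(n)
    return float(np.sum(discounts * np.asarray(rewards)[:n]) + gamma ** n * bootstrap_q)

# With n = 3 the target bootstraps after only three real rewards, so it
# leans more on the learned value function; with n = 10 or 20 it leans
# more on the sampled rewards.
print(n_step_target([1.0, 0.0, 0.5], bootstrap_q=2.0, gamma=0.99, n=3))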
