Is There a Middle Ground to Scaling Up Reinforcement Learning?
I think it's entirely possible to take that problem and divide it into its constituent parts, so that if we're developing an algorithm that is supposed to enable reinforcement learning with language models, well, that can be done with a smaller model. So dividing the problem appropriately can make this quite feasible.

It does seem like in reinforcement learning, the models are much, much smaller than they are in many other parts of machine learning. Do you have any sense for exactly why that is? Is it just historical? Is it merely a performance thing?
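For a concrete picture of what prototyping an RL-with-language-models algorithm on a small model can look like, here is a minimal sketch (not from the episode) of a single REINFORCE-style update applied to a small causal language model. The choice of "distilgpt2", the toy reward function, and the hyperparameters are all illustrative assumptions, not anything the speakers described.

```python
# Minimal sketch: one REINFORCE-style policy-gradient step on a small LM.
# Assumes the Hugging Face `transformers` library; the model name, the toy
# reward, and the learning rate are placeholder choices for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

def toy_reward(text: str) -> float:
    # Placeholder reward that prefers shorter completions. A real setup
    # would use a learned reward model or a task-specific metric.
    return -len(text) / 100.0

prompt = "The experiment showed that"
inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

# Sample a completion from the current policy (no gradients needed here).
with torch.no_grad():
    output = model.generate(
        **inputs, max_new_tokens=20, do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
completion = output[0][prompt_len:]

# Re-run the model with gradients to score the sampled tokens: the logit at
# position i predicts token i + 1, so slice accordingly.
logits = model(output).logits[0, prompt_len - 1:-1]
log_probs = torch.log_softmax(logits, dim=-1)
chosen = log_probs.gather(1, completion.unsqueeze(1)).squeeze(1)

# REINFORCE: weight the completion's log-likelihood by its reward.
reward = toy_reward(tokenizer.decode(completion))
loss = -(reward * chosen.sum())

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the update logic is independent of model scale, a loop like this can be debugged and iterated on with a model this small, then pointed at a much larger model once the algorithm itself works, which is one way of dividing the problem into its constituent parts.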