The Problem With Reward Functions
There's been a lot of discussion and a decent amount of empirical work on this, but we were trying to approach it theoretically. It's pretty close to one of the papers published before, where they have a performance metric and then a separate reward metric. So you can say: this reward function only really expresses my preferences about this narrow set of behaviors; I don't really care about behaviors outside of that. But if we want to keep using reward functions, we need to think of them as a sort of hack that we need to find some other way of dealing with, or maybe strengthen the definition somehow, or tweak it.
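To make the performance-metric-versus-reward-metric distinction concrete, here is a minimal toy sketch. All names and numbers are illustrative assumptions, not taken from any paper: the proxy reward agrees with the true performance metric on a narrow set of intended behaviors but is unconstrained outside it, and greedily optimizing the proxy selects a behavior the true metric rates poorly.

```python
# Toy illustration of a proxy reward vs. a true performance metric.
# All behaviors and scores below are hypothetical, for illustration only.

# True performance metric over a space of behaviors.
true_performance = {
    "tidy_room": 1.0,
    "tidy_room_slowly": 0.8,
    "hide_mess_in_closet": -1.0,  # looks tidy, actually bad
}

# Proxy reward: expresses preferences only over the narrow set of
# behaviors the designer considered (the first two). Outside that set
# it scores a shallow feature ("room looks tidy"), which an
# unintended behavior can game.
proxy_reward = {
    "tidy_room": 1.0,
    "tidy_room_slowly": 0.8,
    "hide_mess_in_closet": 1.2,  # proxy can't tell the difference
}

# Greedily optimizing the proxy picks the gamed behavior.
best_by_proxy = max(proxy_reward, key=proxy_reward.get)
print(best_by_proxy, true_performance[best_by_proxy])
# -> hide_mess_in_closet -1.0
```

The gap only appears off the narrow set: restricted to the two intended behaviors, proxy and true metric rank identically, which is exactly why a reward function can look fine while still being a "hack" once the optimizer explores further.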