Is It Possible to Learn a Reward Function From Human Behavior?
I feel like relying just on learning a reward function from human behavior, one that can then be perfectly optimized, I'm fairly confident that will not work. But it seems likely that there are plans that involve learning what humans want and having better methods to do that. You do need to account for human biases at some point, though, and I'd like to clarify that, and so on.
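The point about accounting for human biases can be made concrete with a toy sketch (everything here is hypothetical, not from the episode): a demonstrator who chooses options Boltzmann-rationally, i.e. with softmax noise, stands in for a biased human. Inference that assumes perfect rationality only recovers which option is the favourite, while inverting the noise model recovers the full reward structure.

```python
import math
import random

# Hypothetical toy setup: three options with known true rewards. The
# "human" chooses Boltzmann-rationally (softmax with inverse temperature
# beta), a simple stand-in for a noisy or biased demonstrator.
random.seed(0)
TRUE_REWARD = [1.0, 0.5, 0.0]
BETA = 2.0  # lower beta = noisier choices

def softmax_choice(rewards, beta):
    """Sample an option index with probability proportional to exp(beta * r)."""
    weights = [math.exp(beta * r) for r in rewards]
    pick = random.random() * sum(weights)
    for i, w in enumerate(weights):
        pick -= w
        if pick <= 0:
            return i
    return len(rewards) - 1

demos = [softmax_choice(TRUE_REWARD, BETA) for _ in range(5000)]
counts = [demos.count(i) for i in range(3)]

# Naive "perfectly rational human" inference: treat the most-chosen option
# as strictly optimal and everything else as equally bad. The ranking
# information among non-favourite options is lost.
naive = [1.0 if c == max(counts) else 0.0 for c in counts]

# Bias-aware inference: invert the Boltzmann model. Choice frequencies
# estimate the softmax probabilities, so log-frequency ratios recover the
# rewards up to an additive constant (anchored so the last option is 0).
probs = [c / len(demos) for c in counts]
bias_aware = [math.log(p / probs[-1]) / BETA for p in probs]

print("naive estimate:     ", naive)
print("bias-aware estimate:", [round(r, 2) for r in bias_aware])
```

The bias-aware estimate lands near the true rewards `[1.0, 0.5, 0.0]`, while the naive one collapses them to a single favourite; this is one way to read the claim that a learned reward you then optimize hard must come with some model of how humans deviate from optimality.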