The Reward Function for an AI System Isn't Optimistic
Power-maximization behavior seems like it leads to some bad outcomes, or some outcomes that I really don't want my AI to have. But also, there's some reward function for an AI system that would incentivize the behavior that I'm imagining. What I think you learn, if you learn that the agent is seeking power, at least in the intuitive sense, is that a lot of these outcomes are now catastrophic.