Is Human Approval of an Action Based on AI Gaining Some Power?
Human approval of an action is just like a Q-function, some measure of how much it achieves. The agent doesn't really know how to make many power-seeking things happen. And if you have your agent either not want to, or not be able to conceive of, power-seeking plans, then you're going to tend to be fine from that perspective. I suppose, just like, this action-approval thing may just be normal optimization of your utility function over world states, kind of in disguise.
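The point above can be sketched in code: scoring actions by approval is structurally like greedy selection with a Q-function, and when approval tracks how good the resulting world state is, it reduces to optimizing a utility over world states. This is a minimal toy, assuming a one-step deterministic world model; all names (`transition`, `utility`, `approval`) are hypothetical illustrations, not anything from the episode.

```python
def transition(state, action):
    # Hypothetical deterministic world model: an action shifts a scalar state.
    return state + action

def utility(state):
    # Hypothetical utility over world states: closer to 10 is better.
    return -abs(state - 10)

def approval(state, action):
    # If human approval tracks how good the resulting state is, approval
    # behaves like a Q-function: Q(s, a) = U(T(s, a)) in this one-step toy.
    return utility(transition(state, action))

def pick_by_approval(state, actions):
    # Greedy action selection using the approval score (Q-style).
    return max(actions, key=lambda a: approval(state, a))

def pick_by_utility(state, actions):
    # Direct optimization of utility over resulting world states.
    return max(actions, key=lambda a: utility(transition(state, a)))

state, actions = 0, [1, 5, 12]
# In this toy, the two selection rules coincide: action approval is
# utility optimization over world states "in disguise".
print(pick_by_approval(state, actions), pick_by_utility(state, actions))
```

The two rules only come apart when approval is *not* a faithful function of the resulting state, e.g. when the human can't evaluate consequences such as power gain.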