
11 - Attainable Utility and Power with Alex Turner
AXRP - the AI X-risk Research Podcast
Is Human Approval of an Action Based on an AI Gaining Some Power?
Human approval of an action is just like a Q-function, some measure of how much the action achieves. The agent doesn't really know how to make many power-seeking things happen. And if your agent either doesn't want to, or isn't able to conceive of, power-seeking plans, then you're going to tend to be fine from that perspective. I suppose this action-approval thing may just be normal optimization of your utility functions over world states, kind of in disguise.
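A rough sketch of that framing (the notation here is mine, not from the episode): if $U_H$ is the human's utility function over world states and $\gamma$ is a discount factor, then approval of action $a$ in state $s$ behaves like an optimal Q-function for $U_H$:

$$\text{Approval}(s, a) \;\approx\; Q^{*}_{U_H}(s, a) \;=\; \max_{\pi}\; \mathbb{E}_{\pi}\!\left[\,\sum_{t=0}^{\infty} \gamma^{t}\, U_H(s_t) \;\middle|\; s_0 = s,\; a_0 = a\right]$$

Under this reading, an agent selecting $a = \arg\max_a \text{Approval}(s, a)$ is effectively maximizing the human's utility over world states, which is the "optimization in disguise" the quote points at.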