Is It Useful to Have an Agent That Preserves Its Ability to Achieve a Wide Range of Things?
This is called "Optimal Policies Tend to Seek Power", by yourself, Logan Smith, Rohan Shah, Andrew Critch, and Prasad Tadepalli. So I guess to start off with, what's the key question this paper is trying to answer?

The key question is: what does optimal behavior tend to look like? Are there regularities? And if so, for a wide range of different goals you could pursue, if you ask all these different goals whether it's optimal to die, they're most likely going to say no. Now, is that true formally? And if so, why, and under what conditions? It answers that in the setting of Markov decision processes.
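The intuition above can be checked numerically. Here is a minimal sketch, using a hypothetical toy MDP of my own construction (not one from the paper): a few mutually reachable "alive" states plus one absorbing "death" state. For many randomly drawn reward functions, we compute the optimal policy by value iteration and count how often it avoids the death state.

```python
import random

# Hypothetical toy MDP (illustrative only, not from the paper):
# states 0-3 are "alive" and mutually reachable in one step;
# state 4 is an absorbing "death" state. Reward depends only on
# the state you land in.
ALIVE = [0, 1, 2, 3]
DEAD = 4

def successors(s):
    if s == DEAD:
        return [DEAD]          # death is absorbing
    return ALIVE + [DEAD]      # move to any alive state, or die

def optimal_prefers_life(rewards, gamma=0.95, iters=200):
    """Run value iteration, then check whether the optimal action
    at the start state (state 0) avoids the death state."""
    V = [0.0] * 5
    for _ in range(iters):
        V = [max(rewards[s2] + gamma * V[s2] for s2 in successors(s))
             for s in range(5)]
    best_next = max(successors(0), key=lambda s2: rewards[s2] + gamma * V[s2])
    return best_next != DEAD

random.seed(0)
trials = 1000
alive = sum(optimal_prefers_life([random.random() for _ in range(5)])
            for _ in range(trials))
print(f"fraction of random reward functions that avoid death: {alive/trials:.2f}")
```

Since any alive state can reach any other, staying alive is optimal exactly when the best alive-state reward beats the death-state reward, so with 4 alive states out of 5 the fraction should come out near 0.8: most randomly drawn goals say dying is not optimal.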