
11 - Attainable Utility and Power with Alex Turner

AXRP - the AI X-risk Research Podcast

CHAPTER

Is Objective Maximization a Bad Frame?

Human goals are not necessarily very well modelled as just, you know, objective maximization. In some sense, they trivially have to be, but I don't feel like it's a very good specification language for these agents. And so the idea is that by having the agent preserve its ability to do a wide range of different objectives, it'll also, perhaps accidentally, preserve its ability to pursue the right objective - though we can't begin to specify what that right objective is.
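To make the idea concrete, here is a minimal Python sketch of an attainable-utility-style penalty in the spirit of what's described here: the agent is penalized for actions that change how well it could pursue a set of auxiliary objectives, relative to doing nothing. The names (`q_aux`, `noop_action`, `lam`) and the toy example are illustrative assumptions, not details from the episode.

```python
def aup_penalty(q_aux, state, action, noop_action):
    """Average change in attainable utility across auxiliary objectives.

    q_aux: list of functions Q_i(state, action) estimating how well the agent
           could pursue auxiliary objective i after taking `action` in `state`.
    """
    return sum(
        abs(q(state, action) - q(state, noop_action)) for q in q_aux
    ) / len(q_aux)


def shaped_reward(primary_reward, q_aux, state, action, noop_action, lam=0.1):
    """Primary reward minus a scaled penalty for shifting attainable utility."""
    return primary_reward - lam * aup_penalty(q_aux, state, action, noop_action)


# Toy usage with two made-up auxiliary Q-functions (purely illustrative).
q_aux = [
    lambda s, a: float(a == "collect"),      # ability to collect items
    lambda s, a: float(a != "smash_vase"),   # ability to keep the vase intact
]
print(shaped_reward(primary_reward=1.0, q_aux=q_aux, state=None,
                    action="smash_vase", noop_action="wait"))
```

The key design choice, as the quote suggests, is that the penalty ranges over a wide collection of objectives rather than trying to specify the one right objective directly.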

