
11 - Attainable Utility and Power with Alex Turner

AXRP - the AI X-risk Research Podcast

CHAPTER

The Reward Function for an AI System Isn't Optimistic

Power-maximizing behavior seems like it leads to some bad outcomes, or some outcomes that I really don't want my AI to have. But also, there's some reward function for an AI system that would incentivize the behavior that I'm imagining. What I think you learn, if you learn that the agent is seeking power, at least in the intuitive sense, is that a lot of these outcomes are now catastrophic.

