Aspects of the Utility Maximization Framework
I think the biggest value loss isn't going to come from broken vases; it's going to come from AI seeking power and taking it from us. And in that situation, you basically want the side-effect measure to stop the agent from wanting to take power. But I'm leaning against there being a clean way of doing that through the utility maximization framework right now. This is more something I explored in my sequence, Reframing Impact, on the Alignment Forum.
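For context, the usual way a side-effect measure is attached to the utility maximization framework is as a penalty term on the agent's objective. The sketch below is a paraphrase of the attainable-utility-preservation form from the Reframing Impact sequence, not a formula stated in this episode; the symbols ($R$, $\lambda$, $\mathcal{R}$, $Q_{R_i}$, $\varnothing$) follow that work's conventions.

```latex
% Impact-penalized objective: the agent maximizes its task reward R
% minus a scaled penalty for side effects.
\[
  R'(s, a) \;=\; R(s, a) \;-\; \lambda \, \mathrm{Penalty}(s, a)
\]
% One concrete penalty (attainable utility preservation): the average
% change in Q-values for a set of auxiliary reward functions
% \mathcal{R}, relative to taking the no-op action \varnothing.
\[
  \mathrm{Penalty}(s, a) \;=\; \frac{1}{|\mathcal{R}|}
  \sum_{R_i \in \mathcal{R}} \bigl|\, Q_{R_i}(s, a) - Q_{R_i}(s, \varnothing) \,\bigr|
\]
```

The point being made in the episode is that even given this penalty shape, there is no obviously clean choice of $\lambda$ and $\mathcal{R}$ that makes the penalty reliably block power-seeking rather than merely discouraging small side effects like broken vases.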