Specifying the Positive Effects of Achieving a Task
In the future, agents could be programmed to achieve tasks. The point of achieving a task is that it makes some good things happen. But how do you account for downstream effects? Is there a right way to define impact measures, in practice or in theory?
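One common family of proposals penalizes the agent for how far it moves the world away from some baseline. As a minimal sketch (not any specific published method), suppose the effective reward is the task reward minus a weighted distance between the current state and the state that would have resulted from doing nothing; the function name, the L1 distance, and the no-op baseline are all illustrative assumptions:

```python
def penalized_reward(task_reward, state, baseline_state, weight=1.0):
    """Task reward minus a weighted side-effect penalty (illustrative).

    `state` and `baseline_state` are numeric feature vectors; the penalty
    is their L1 distance, a crude stand-in for "downstream impact"
    relative to a do-nothing baseline.
    """
    impact = sum(abs(s - b) for s, b in zip(state, baseline_state))
    return task_reward - weight * impact


# An action that earns reward 1.0 but perturbs two state features:
r = penalized_reward(1.0, state=[2.0, 0.0], baseline_state=[0.0, 0.0], weight=0.1)
```

The open question in the post is exactly what this sketch glosses over: a good outcome *is* a downstream effect, so a naive distance penalty discourages the positive impact the task was meant to produce, and the choice of baseline and distance function does most of the real work.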