The Problem With Expected Utility Functions in Deep Learning
In general, I think that expected utility is not the correct sort of description to use for values. One of the classic objections to expected utility theory is that anything can be viewed as maximizing a utility function, right? Rocks, bricks, or your brain maximizing its compliance with the laws of physics, whatever. What we want is some simple utility function that matches our preferences, or something like that, but you can also get that out of a deep learning system by turning it into an inner optimizer. The energy function isn't defined over world-model stuff; you'd look at the energy function and have no idea about the network's preferences. It would be no more informative than the weights of the network.
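To make that vacuity objection concrete, here is a minimal sketch (not from the episode; the states, policy, and function names are all invented for illustration) of how any fixed behaviour, even a rock's, can be trivially "rationalized" as maximizing some utility function:

```python
# Sketch of the vacuity objection: for *any* fixed behaviour we can write
# down a utility function that the behaviour "maximizes", so the bare claim
# "X maximizes a utility function" places no constraint on X by itself.
# (States, actions, and the policy are made up for illustration.)

def arbitrary_policy(state: str) -> str:
    """Some fixed behaviour we want to 'explain' -- could be a rock's."""
    return {"hot": "sit_still", "cold": "sit_still"}[state]

def rationalizing_utility(state: str, action: str) -> float:
    """Assigns 1 to whatever the policy actually does, 0 to everything else."""
    return 1.0 if action == arbitrary_policy(state) else 0.0

def utility_maximizer(state: str, actions: list[str]) -> str:
    """Picks the highest-'utility' action -- which just recovers the policy."""
    return max(actions, key=lambda a: rationalizing_utility(state, a))

if __name__ == "__main__":
    for s in ["hot", "cold"]:
        assert utility_maximizer(s, ["sit_still", "move"]) == arbitrary_policy(s)
    print("The trivial utility function 'explains' the behaviour perfectly, "
          "which is exactly why it explains nothing.")
```

The construction works for any behaviour whatsoever, which is why the objection asks for a simple utility function that matches our preferences rather than merely some utility function that the system happens to maximize.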