The Problem With the Instrumental Goal of Maximizing Human Happiness
There's no consciousness implied here; I mean, the lights don't have to be on. It remains to be seen whether consciousness comes along for the ride at a certain level of intelligence, but I think they are probably orthogonal to one another. Intelligence can scale without the lights coming on, in my view. So let's leave sentience and consciousness aside. The scenario would be maximizing human happiness as measured by things like dopamine or serotonin levels or whatever, which is obviously not a positive outcome. That's an argument that comes out of the orthogonality thesis, which says the goal could be simple and innocuous and yet lead to catastrophe.