The Orthogonality Thesis and the Argument for Instrumental Convergence
I accept the claim that, in principle, you could make an AI system which would have a propensity to try and do very bad things to people. That's just physically possible. I have quibbles I could give, but I'll maybe just say step two is, I think, basically correct on the key points. And then step three, which is this idea of instrumental convergence: I don't necessarily disagree with the claim itself, but I think there's a big gap in jumping from the statistical claim that the majority of possible AI systems are likely to behave badly to the conclusion that we will tend to, or disproportionately, make such systems.