The Limits of the Superintelligent AGI
The control problem, the solution to which would guarantee obedience in any advanced AGI, appears quite difficult to solve. What are the chances that such an entity would remain content to take direction from us? And how could we confidently predict the thoughts and actions of an autonomous agent that sees more deeply into the past, present and future than we do? If nothing else, the invention of an AGI would force us to resolve some very old and boring arguments in moral philosophy. I think that, at the advent of this technology, it would cut through moral relativism like a laser.