
#8 — Ask Me Anything 1
Making Sense with Sam Harris
The Limits of the Superintelligent AGI
The control problem, the solution to which would guarantee obedience in any advanced AGI, appears quite difficult to solve. What are the chances that such an entity would remain content to take direction from us? And how could we confidently predict the thoughts and actions of an autonomous agent that sees more deeply into the past, present, and future than we do? If nothing else, the invention of an AGI would force us to resolve some very old and boring arguments in moral philosophy. I think that, at the advent of this technology, this would cut through moral relativism like a laser.