I'm instinctively skeptical of the idea that some abstract reasoning ability will necessarily lead machines toward any moral boundaries. "If we build super-smart machines, give them a lot of power, and they have no norms, values, or laws, then we're in deep trouble. Whether we can solve that problem depends on whether we can do a really good version of a really hard problem," he says. "We need a real plan here, and I don't think we have it. But there's hope this might be a solvable problem," according to Yudkowsky.