We don't know how to specify these things in terms that learning systems would understand, and we can't really do it entirely with an innate set of rules either; there has to be some learning. I'm not worried about robots taking over the world. They have no motivation to do so, and frankly they're dumb right now. Getting better at Go has not made them any more desirous of human territory. It's fine that we have a few people thinking about these issues now, even if they never come to pass, but it's not an urgent need.
Artificial intelligence is all around us. Deep-learning algorithms are used to classify images, suggest songs, and even drive cars. But the quest to build truly “human” artificial intelligence is still coming up short. Gary Marcus argues that this is no accident: the very features that make neural networks so powerful also prevent them from developing a robust, common-sense view of the world. He advocates combining these techniques with a more symbolic approach to constructing AI algorithms.
Support Mindscape on Patreon.
Gary Marcus received his Ph.D. in cognitive science from MIT. He is the founder and CEO of Robust.AI, and was formerly a professor of psychology at NYU as well as the founder of Geometric Intelligence. Among his books is Rebooting AI: Building Machines We Can Trust (with Ernest Davis).