In some ways, the 'symbolic approach' to artificial intelligence is kind of dumb. It's about having functions with variables that you bind to particular instances, and then you calculate the values. And so it's really a different thesis, almost, about what cognition should be. But I think the right thesis is actually that our brains, anyway, can do both. We can do the logical, abstract stuff even at seven months old. So there's this ability for us to do abstraction, which allows us to be computer programmers or to do logic. And there's also this heavy statistical analysis that we humans do, though we're not quite as good at it as the machines are.
Artificial intelligence is everywhere around us. Deep-learning algorithms are used to classify images, suggest songs, and even drive cars. But the quest to build truly "human" artificial intelligence is still coming up short. Gary Marcus argues that this is not an accident: the features that make neural networks so powerful also prevent them from developing a robust, common-sense view of the world. He advocates combining these techniques with a more symbolic approach to constructing AI algorithms.
Support Mindscape on Patreon.
Gary Marcus received his Ph.D. in cognitive science from MIT. He is founder and CEO of Robust.AI, and was formerly a professor of psychology at NYU as well as founder of Geometric Intelligence. His books include Rebooting AI: Building Machines We Can Trust (with Ernest Davis).
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.