Is there a way of letting computers figure out the same kind of common sense that we have? I take a view that, I think, is a little like what Kant was trying to say in the Critique of Pure Reason. If you don't have prior notions about the enduring objects that you're talking about, then you're just in correlation soup. So I don't fully have an answer to the question that you posed a minute ago. But my view is you can learn a lot, but that you need a framework in which to embed that knowledge. In fact, I think GPT is a brilliant experiment, unintentional, but brilliant.
Artificial intelligence is all around us. Deep-learning algorithms are used to classify images, suggest songs to us, and even to drive cars. But the quest to build truly “human” artificial intelligence is still coming up short. Gary Marcus argues that this is no accident: the very features that make neural networks so powerful also prevent them from developing a robust, common-sense view of the world. He advocates combining these techniques with a more symbolic approach to constructing AI algorithms.
Support Mindscape on Patreon.
Gary Marcus received his Ph.D. in cognitive science from MIT. He is founder and CEO of Robust.AI, and was formerly a professor of psychology at NYU as well as founder of Geometric Intelligence. Among his books are Rebooting AI: Building Machines We Can Trust (with Ernest Davis).
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.