GPT-3 is the system that everyone talks about these days. Type in part of a story and it will continue the story. So, um, it will give you fluent speech, but it doesn't actually understand anything about, say, toxicology, or why you might die. There's no underlying understanding of what it is talking about. And so people are trying to put all these band-aids on top of it to make it less toxic, but we don't know how to do that. Language is a great example: you can't query GPT and say, 'Are you making a toxic remark? Do you know that you are?' The technology does not really afford that luxury.
Artificial intelligence is all around us. Deep-learning algorithms are used to classify images, suggest songs, and even drive cars. But the quest to build truly “human” artificial intelligence is still coming up short. Gary Marcus argues that this is no accident: the very features that make neural networks so powerful also prevent them from developing a robust, common-sense view of the world. He advocates combining these techniques with a more symbolic approach to constructing AI algorithms.
Support Mindscape on Patreon.
Gary Marcus received his Ph.D. in cognitive science from MIT. He is founder and CEO of Robust.AI, and was formerly a professor of psychology at NYU as well as founder of Geometric Intelligence. Among his books are Rebooting AI: Building Machines We Can Trust (with Ernest Davis).
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.