487. Challenging AI’s Capabilities with Gary Marcus
Dec 6, 2024
In a riveting discussion, Gary Marcus, an Emeritus Professor of Psychology at NYU and AI expert, challenges the prevailing misconceptions of artificial intelligence. He highlights the 'gullibility gap' where people overestimate AI's capabilities and stresses the urgent need for regulatory frameworks akin to those in pharmacology. Marcus critiques current AI models, advocating for a balance between deep learning and traditional programming. He also delves into the societal impacts of generative AI, including copyright dilemmas and the necessity of critical thinking in education.
The discussion emphasizes the 'gullibility gap' where people often overestimate AI's abilities, highlighting the need for critical evaluation of its outputs.
Regulatory oversight for AI, akin to the FDA for pharmaceuticals, is deemed essential to mitigate risks like misinformation and discrimination.
Current AI models lack scientific rigor and operate like 'alchemy', necessitating a shift towards integrating classical programming and deep learning for better reliability.
Deep dives
The Evolution of Human Cognition
Human cognition evolved toward local optima rather than optimal solutions, leaving it with several built-in limitations. The contrast between human cognitive mechanisms and AI capabilities shows that while humans harbor biases and irrational tendencies, they still possess a flexibility and adaptability that machines currently lack. Cognitive biases such as confirmation bias, for example, show how humans can process information in distorted ways and reach irrational decisions, yet this capacity for flexible thinking remains unparalleled in AI systems. Describing humans as a 'low bar' for intelligence underscores how important it is to understand cognitive limitations in both human and artificial reasoning.
The Gullibility Gap in AI Perception
There exists a pervasive gullibility gap where people overestimate AI's abilities, often viewing machine outputs with unwarranted trust. This is rooted in a historical context where early AI models, such as ELIZA, demonstrated how individuals could easily mistake basic algorithms for more sophisticated intelligence. The current enthusiasm surrounding generative AI perpetuates this illusion, as many overlook the inherent limitations and errors of these systems. The discussion draws attention to the need for greater awareness of AI's capabilities and limitations, advocating for a more critical evaluation of how AI is perceived and utilized in society.
The Importance of Regulation in AI Development
There is an urgent need for regulatory frameworks that can assess the risks associated with AI technologies, akin to the FDA's role in overseeing pharmaceuticals. This includes concerns over issues like discrimination and misinformation arising from the deployment of generative AI tools, which can produce harmful outputs without accountability. Just as medical practices require scrutiny and approval to prevent public harm, AI should also undergo similar vetting processes before being widely implemented. The analogy highlights that without appropriate governmental oversight, the commercial motivations behind AI development may lead to irresponsible practices that endanger public trust and safety.
The Need for Upgraded AI Approaches
Current AI models often operate like 'alchemy' because they rely on vast amounts of data without a principled understanding of the processes at play. This lack of scientific rigor makes it difficult to address critical issues such as hallucinations, where AI generates incorrect or fabricated information. The discussion underscores that despite ongoing advances, fundamental problems of reliability and reasoning remain unaddressed, pointing to the need for a paradigm shift in how AI systems are designed. Integrating classical programming with modern deep learning techniques holds the potential to produce more reliable and capable AI systems.
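To make the hybrid idea concrete, here is a minimal, hypothetical sketch (our own illustration, not anything from the episode): a stand-in for a learned model proposes a free-form answer, and a classical, rule-based checker verifies it before it is accepted. The function names and the arithmetic-only check are illustrative assumptions.

```python
import re

def neural_propose(question: str) -> str:
    # Stand-in for a deep-learning model's free-form answer;
    # deliberately wrong here so the symbolic check has something to catch.
    return "2 + 2 = 5"

def symbolic_verify(claim: str) -> bool:
    # Classical program: parse "a + b = c" and check it exactly.
    m = re.fullmatch(r"\s*(\d+)\s*\+\s*(\d+)\s*=\s*(\d+)\s*", claim)
    return bool(m) and int(m.group(1)) + int(m.group(2)) == int(m.group(3))

def hybrid_answer(question: str) -> str:
    # Accept the learned proposal only if the classical check passes.
    proposal = neural_propose(question)
    return proposal if symbolic_verify(proposal) else "unverified: " + proposal

print(hybrid_answer("What is 2 + 2?"))  # -> unverified: 2 + 2 = 5
```

The point of the sketch is only the division of labor: the learned component stays free-form, while the classical component supplies the hard guarantee of correctness where one is available.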
The Future of AI and Its Societal Impact
The trajectory of AI suggests that while current systems exhibit impressive capabilities, they have considerable limitations that may undermine their societal contributions. For instance, the success of specialized applications, like AlphaFold, reveals the effectiveness of narrow AI in addressing specific problems rather than broader general intelligence. The future of AI is likely to involve hybrid approaches that combine different methodologies to enhance overall performance and reliability. Education and critical thinking will remain vital for individuals to navigate the complexities of AI technology and its implications, ensuring that humans maintain a key role in decision-making processes.
In the last five years, artificial intelligence has exploded, but there are large gaps in our understanding of how it works, what it is and is not capable of, and what a realistic future for AI looks like.
Gary Marcus is an emeritus professor of psychology and neural science at NYU and an expert in AI. His books, including Taming Silicon Valley: How We Can Ensure That AI Works for Us and Rebooting AI: Building Artificial Intelligence We Can Trust, explore the limitations and challenges of contemporary AI.
Gary and Greg discuss the misconceptions about AI’s current capabilities and the “gullibility gap” where people overestimate AI's abilities, the societal impacts of AI including misinformation and discrimination, and why AI might need regulatory oversight akin to the FDA.
30:28: [With AI] I think the last five years have been a kind of digression, a detour from the work that we actually need to do. But I think we will get there. People are already realizing that the economics are not there, the reliability is not there. At some point, there will be an appetite to do something different. It's very difficult right now to do anything different because so many resources go into this one approach that makes it hard to start a startup to do anything else. Expectations are too high because people want magical AI that can answer any question, and we don't actually know how to do that with reliability right now. There are all kinds of sociological problems, but they will be solved. Not only that, but I'm somebody who wants AI to succeed.
Why AI hallucinations can't be fixed until we stop running the system
21:02: Any given hallucination is created by the same mechanism as any given truth that comes out of these systems. So, it's all built by the same thing. With your less-than, greater-than bug, you can work on it selectively in a modular system; you fix it. But the only way you can kill hallucinations is to not run the system. As long as you run the system, you're going to get it sometimes because that's how it works.
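A toy sketch of the contrast Marcus draws (purely our own illustration; the prompt, answers, and probabilities are made up): a modular bug such as a '<' written where '>=' was meant lives in one place and can be fixed there, whereas a single generative sampling mechanism produces correct and incorrect outputs alike, so there is no separate "hallucination module" to patch.

```python
import random

# Modular bug: the faulty comparison lives in exactly one function,
# so it can be fixed once without touching anything else.
def is_adult(age: int) -> bool:
    return age >= 18  # was "age < 18" before the one-line fix

# Generative mechanism: truths and fabrications come from the same sampler.
CONTINUATIONS = [("Paris", 0.9), ("Lyon", 0.1)]  # hypothetical next-token distribution

def generate_capital_of_france() -> str:
    options, weights = zip(*CONTINUATIONS)
    return random.choices(options, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(is_adult(20))  # True, after the one-line modular fix
    # Mostly right, occasionally wrong, and both outcomes use the same code path.
    print([generate_capital_of_france() for _ in range(10)])
```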
Should we help people cultivate their uniquely human common sense?
43:01: In general, critical thinking skills are always useful. It's not just common sense; a lot of it is scientific method and reasoning. I think the most important thing that people learn in psychology grad school is that whenever you've done an experiment and you think your hypothesis works, someone clever can come up with another hypothesis and point out a control group that you haven't done. That's a really valuable lesson. That breaks some of the confirmation bias and really raises one's level of sophistication. That's beyond common sense. It's part of scientific reasoning; those things are incredibly useful. I think they'll still be useful in 20 years.