Gary Marcus, a professor emeritus of cognitive science and author of "Taming Silicon Valley," offers a critical view of AI's trajectory toward 2025. He highlights the glaring limitations of large language models in reasoning and reliability. Arguing against a narrow focus on deep learning, Marcus calls for a wider range of scientific approaches, including a fusion of symbolic AI and neural networks. He also warns of the urgent need for effective AI regulation to prevent harms such as misinformation and discrimination in hiring.
While predictions of imminent AGI continue to excite the field, existing models such as LLMs still struggle significantly with reasoning, reliability, and factual accuracy.
Gary Marcus emphasizes the need for a broader, more responsible approach to AI research, advocating for diverse scientific ideas beyond deep learning alone to truly advance the field.
Deep dives
The Future of AI in 2025
Artificial intelligence is expected to advance significantly in 2025, with much of the discussion centered on artificial general intelligence (AGI). Some experts anticipate premature declarations of triumph in AI development, particularly regarding AGI, which could sow confusion over its true nature and capabilities. The reality, by contrast, includes the continued limitations of existing AI technologies, specifically large language models (LLMs). Many in the field argue that while LLMs can generate broad but shallow responses, they lack the depth and reliability required of truly intelligent systems.
Critique of Current AI Models
The limitations of current AI models, particularly LLMs, are a topic of concern among experts. These models often struggle with factual accuracy and reasoning, leading to what is termed 'hallucination', in which incorrect information is presented confidently. Despite their ability to generate text and ideas, they perform poorly when tasked with logical reasoning or understanding complex contexts. Critics such as Gary Marcus argue that continued reliance on LLMs is a detour from achieving genuine AI, and that a broader, more responsible approach to AI development is necessary.
The Challenge of AGI Claims
A significant issue with the advancement of AI technologies is the potential for misleading claims that AGI has been achieved. As enthusiasm around AI grows, many voices may declare success prematurely, muddying both the definition of AGI and the measure of progress toward it. The debate often turns on differing interpretations of AGI, such as whether it should be judged by its flexibility or by its economic utility. Critics warn that the pursuit of superficial milestones obscures the deeper understanding required to build genuinely advanced AI systems.
Looking Ahead: The Role of Public Skepticism
As the AI landscape evolves, public skepticism toward the technology's efficacy and safety remains crucial. Experts emphasize the importance of transparent discussion of how AI systems are deployed, arguing that meaningful checks and balances must be established before the technologies are widely adopted. Potential pitfalls include privacy violations and discriminatory practices in applications such as job evaluations. Engaging the public in these conversations is essential to ensure that AI technologies align with societal values and do not perpetuate harm.
From the release of AI agents to claims that artificial general intelligence has (finally!) been achieved, 2025 will probably be another blockbuster year for AI. That sense of continuous progress is not shared by everyone, however. Generative AI, based on large language models (LLMs), struggles with reasoning, reliability and truthfulness. While progress has been made in those domains, sceptics argue that the limitations of LLMs will fundamentally restrict the future of AI.
In this episode, Alok Jha, The Economist’s science and technology editor, interviews Gary Marcus, one of modern AI’s most energetic critics. They discuss what to expect in 2025 and why Gary is pushing for researchers to work on a much wider range of scientific ideas (in other words, beyond deep learning) to enable AI to reach its full potential.
Gary Marcus is a professor emeritus in cognitive science at New York University and the author of “Taming Silicon Valley”, a book advocating for a more responsible approach to the development of AI.