
Babbage from The Economist (subscriber edition)
Gary Marcus: a sceptical take on AI in 2025
Jan 15, 2025
Gary Marcus, a cognitive science professor emeritus and author of "Taming Silicon Valley," offers a critical view of AI's trajectory in 2025. He highlights the persistent limitations of large language models in reasoning and reliability. Arguing against the field's narrow focus on deep learning, Marcus calls for a broader range of scientific approaches, including a fusion of symbolic AI and neural networks. He also warns of the urgent need for effective AI regulation to curb risks such as misinformation and discrimination in hiring.
36:53
Podcast summary created with Snipd AI
Quick takeaways
- While excitement in the field continues, fueled in particular by predictions of AGI, existing models such as LLMs still struggle significantly with reasoning, reliability, and factual accuracy.
- Gary Marcus emphasizes the need for a broader and more responsible approach to AI research, advocating for diverse scientific ideas beyond deep learning alone to truly advance the field.
Deep dives
The Future of AI in 2025
Artificial intelligence is expected to advance significantly in 2025, with much of the discussion centered on artificial general intelligence (AGI). Some experts anticipate claims of triumph in AI development, particularly around AGI, which could sow confusion over what AGI actually is and what it can do. The reality, by contrast, includes the continued limitations of existing AI technologies, specifically large language models (LLMs). Many in the field argue that while LLMs can generate broad but shallow responses, they lack the depth and reliability required for truly intelligent systems.