
Babbage from The Economist (subscriber edition)
Gary Marcus: a sceptical take on AI in 2025
 Jan 15, 2025 
Gary Marcus, a cognitive science professor emeritus and author of "Taming Silicon Valley," offers a critical view of AI's trajectory heading into 2025. He highlights the glaring limitations of large language models in reasoning and reliability. Emphasizing the need for diverse scientific approaches, Marcus argues against a narrow focus on deep learning and advocates a fusion of symbolic AI and neural networks. He also warns of the urgent need for effective AI regulation to prevent harms such as misinformation and discrimination in hiring.
LLMs: A Detour in AI
- Large language models (LLMs) are a detour in AI development, excelling at brainstorming but lacking reasoning and reliability.
- LLMs struggle with factuality, hallucinations, and generalizability, hindering commercial adoption and safety.
Gary Marcus' AI Usage
- Gary Marcus primarily uses generative image programs for amusement and to monitor AI progress, highlighting their limitations.
- He avoids using LLMs for his own writing, citing their bland prose and lack of trustworthiness.
2025 AI Predictions
- In 2025, many will falsely declare Artificial General Intelligence (AGI) has arrived, confusing shallow, broad intelligence with true AGI.
- AI agents will be hyped but remain unreliable, making impressive demos while still fabricating information.