

487. Challenging AI’s Capabilities with Gary Marcus
Dec 6, 2024
In a riveting discussion, Gary Marcus, an Emeritus Professor of Psychology at NYU and AI expert, challenges prevailing misconceptions about artificial intelligence. He highlights the 'gullibility gap', in which people overestimate AI's capabilities, and stresses the urgent need for regulatory frameworks akin to those governing pharmaceuticals. Marcus critiques current AI models and advocates for a balance between deep learning and traditional programming. He also delves into the societal impacts of generative AI, including copyright dilemmas and the necessity of critical thinking in education.
AI Snips
Tesla Summon Feature Flaw
- Tesla's "summon" feature, intended for autonomous driving across the country, has faced limitations.
- At an air show, a Tesla using this feature crashed into a jet, highlighting its inability to recognize objects outside its training data.
Cruise's Teleoperation Reliance
- Cruise, GM's self-driving car subsidiary, had more remote operators than cars on the road, demonstrating its reliance on human oversight.
- This reliance on teleoperation reveals the current limitations of fully autonomous driving technology.
AI Optimism and Pessimism
- Gary Marcus is pessimistic about the potential of pure large language models, believing that more data won't solve their core issues.
- He remains optimistic about achieving better AI through alternative approaches such as neurosymbolic hybrids, which combine classical AI with neural networks.