

Episode 2289: Gary Marcus on how Artificial General Intelligence (AGI) is, in the long run, inevitable
Dec 31, 2024
Gary Marcus, a prominent AI skeptic and former CEO of Geometric Intelligence, discusses the inevitability of Artificial General Intelligence (AGI) by 2100. He highlights the urgent need for AI regulation before the technology surpasses human control. The conversation dives into the complexities of the current AI landscape, the alignment problem, and the rivalry among tech giants like OpenAI and Google. Marcus critiques the field's narrow focus on generative models and emphasizes that the path to AGI may require yet-to-be-invented approaches, urging a balance of methodologies in future development.
Gary Marcus's Skepticism
- Gary Marcus clarifies his AI skepticism, believing AGI is inevitable but not imminent.
- He is skeptical of generative AI's usefulness and concerned about its potential for misuse.
Generative AI's Limitations
- Generative AI, championed by Geoffrey Hinton, is not the path to AGI, according to Marcus.
- He believes current machines lack true understanding and are merely statistical predictors.
Neurosymbolic AI
- Generative AI is a useful tool, but not a complete solution for AGI, says Marcus.
- He advocates for neurosymbolic AI, combining statistical and abstract reasoning approaches.