Keen On America

Episode 2534: Why Generative AI is a Technological Dead End

May 15, 2025
Peter Voss, CEO of Aigo.ai and a pioneer in AI who coined 'Artificial General Intelligence' in 2001, critiques generative AI as a misguided venture. He argues that large language models (LLMs) are fundamentally flawed due to their lack of memory and inability to learn incrementally, calling them a technological dead end. Voss warns of an impending bubble burst in the industry, drawing parallels to past economic manias. He advocates for a return to foundational principles in AI development to truly advance towards human-like intelligence.
AI Snips
INSIGHT

LLMs Are A Technological Dead End

  • Large language models (LLMs) are spectacular at tasks like translation and summarization but cannot lead to Artificial General Intelligence (AGI).
  • LLMs lack incremental learning ability and thus represent a technological dead end for AGI.
INSIGHT

LLMs Cannot Learn Incrementally

  • LLMs need all of their training data upfront and, because of how backpropagation-based training works, cannot update their weights incrementally in real time.
  • This structural limitation leaves deployed models effectively read-only; incorporating new information requires an expensive retraining run (see the sketch after this list).
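
A minimal sketch of that read-only behavior, assuming PyTorch and the Hugging Face transformers library, with GPT-2 standing in for a larger LLM (none of this code is from the episode): generation runs with frozen weights under torch.no_grad(), and the only way to change what the model "knows" is a separate, offline backpropagation pass over training data.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is used here only as a small, concrete stand-in; the same point
# applies to any pretrained causal language model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode: weights are fixed

inputs = tokenizer("The capital of France is", return_tensors="pt")

with torch.no_grad():  # no gradients flow, so nothing the model sees here can update it
    output_ids = model.generate(
        **inputs, max_new_tokens=5, pad_token_id=tokenizer.eos_token_id
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

# Teaching the model anything new means a separate training run
# (backpropagation over a curated dataset), not an in-conversation update:
#   optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
#   loss = model(**batch, labels=batch["input_ids"]).loss
#   loss.backward(); optimizer.step()
```

In other words, nothing learned during a conversation is written back into the model; that is the incremental-learning gap Voss is pointing at.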
INSIGHT

LLM Hallucinations Are Inherent

  • Hallucinations are intrinsic to LLMs' statistical nature and are unlikely to disappear as models scale (a toy sketch of this sampling behavior follows below).
  • Once an LLM commits to an answer, it generates justifications that may be fabricated to keep the conversation coherent.
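
To make the "statistical nature" point concrete, here is a toy Python/NumPy sketch with invented numbers (not from the episode): decoding samples the next token in proportion to model probability, so a fluent but wrong continuation that happens to score highly can be emitted, and nothing in the procedure checks it against facts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical next-token candidates and made-up scores after a prompt like
# "The capital of Australia is ..."; the plausible-but-wrong answer still
# receives substantial probability mass.
candidates = ["Canberra", "Sydney", "Melbourne"]
logits = np.array([1.6, 1.3, 0.4])  # invented numbers, for illustration only

def softmax(x, temperature=1.0):
    z = (x - x.max()) / temperature
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits)
for token, p in zip(candidates, probs):
    print(f"{token}: {p:.2f}")

# Sampling is purely proportional to probability; nothing here verifies the
# answer, and later tokens are generated to stay coherent with whatever was
# sampled, not to correct it.
print("sampled answer:", rng.choice(candidates, p=probs))
```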