The Josh Bersin Company

AI Agents: Not Always Right But Seldom In Doubt

Oct 29, 2025
Delve into the BBC study revealing that 45% of AI-generated news answers are incorrect, highlighting the troubling self-confidence of these systems. Discover the 'polluted corpus' problem, where bad data contaminates outputs. Explore the importance of data quality in corporate settings and learn essential AI thinking skills. Josh shares personal experiences with tools such as OpenAI and Claude, and urges critical thinking in interpreting AI results. He also stresses the need for human oversight to ensure accountability in decision-making.
INSIGHT

High Error Rates In Public AI Agents

  • Public-domain AI agents answered many news-related queries incorrectly in the BBC/EBU study, exposing their limits on this kind of question.
  • The agents sound confident yet often return errors: misattributed sources, hallucinations, and outdated facts.
ANECDOTE

Personal Distrust From Checkable Mistakes

  • Josh Bersin describes using OpenAI and Claude for labor-market and company data, and losing trust after finding sources he could not trace.
  • He routinely validates answers and often finds them incorrect when cross-checked.
INSIGHT

LLMs Are Statistical, Not Logical, Thinkers

  • LLMs statistically predict tokens across massive corpora rather than apply human logic or real understanding.
  • They produce humanlike language but lack higher-level reasoning, causing plausible-sounding errors.
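The statistical-prediction point above can be made concrete with a toy sketch. This is not how production LLMs work (they use neural networks over token embeddings, not word counts), but a simple bigram model shows the core idea: the next word is chosen by observed frequency, not by checking what is true. The corpus below is invented for illustration and deliberately includes one wrong "fact" to show how a polluted corpus leaks into outputs.

```python
import random
from collections import defaultdict, Counter

# Toy training corpus. One sentence is false ("lyon"), mimicking the
# "polluted corpus" problem: bad data sits alongside good data.
corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "
    "the capital of france is paris . "
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev):
    """Sample the next word in proportion to how often it followed `prev`.

    The model has no notion of truth or logic; it only reproduces the
    statistics of its training data, confidently either way.
    """
    counts = follows[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# After "is", the model says "paris" 2/3 of the time and the false
# "lyon" 1/3 of the time -- a plausible-sounding error, by construction.
print(follows["is"])  # Counter({'paris': 2, 'lyon': 1})
```

Scaled up by many orders of magnitude and with far richer context, this is the mechanism behind fluent but unverified answers: the wrong completion is not a malfunction, just a lower-probability continuation that still gets sampled.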