Machine Learning Street Talk (MLST)

Gary Marcus' keynote at AGI-24

Aug 17, 2024
In this discussion, Gary Marcus, a renowned professor and AI expert, critiques the current state of large language models and generative AI, highlighting their unreliability and tendency to hallucinate. He argues that merely scaling data won't lead us to AGI and proposes a hybrid AI approach that integrates deep learning with symbolic reasoning. Marcus voices concerns about the ethical implications of AI deployment and predicts a potential 'AI winter' due to overhyped technologies and inadequate regulation, emphasizing the necessity for deeper conceptual understanding in AI.
INSIGHT

Deep Learning's Limitations

  • Deep learning excels in routine tasks but struggles with unusual cases.
  • Current AI lacks deep understanding, relying more on frequency than semantics.
ANECDOTE

LLM Failures

  • Large language models (LLMs) still struggle with basic tasks like negation and information integration.
  • Examples include LLMs failing to draw a beach scene without including an elephant and generating inaccurate sightseeing maps.
ANECDOTE

Unreliable LLMs

  • LLMs exhibit unreliability and a lack of common sense.
  • One example is an LLM estimating a train ride from New York to Rome, Italy, at under five hours, despite no such route existing.