Machine Learning Street Talk (MLST)

#68 DR. WALID SABA 2.0 - Natural Language Understanding [UNPLUGGED]

Mar 7, 2022
Dr. Walid Saba, a Senior Scientist at Sorcero, critiques deep learning's approach to natural language understanding. He argues that reliance on statistical learning is bound to fail, since it amounts to trying to memorize an infinite set of sentences. Saba emphasizes the importance of symbolic logic and human cognitive processes in AI development. He explores the complexities of memory in neural networks, the distinctions between top-down and bottom-up problem-solving, and the need for hybrid models that integrate logic and prior knowledge. His insights challenge conventional methods and advocate for a deeper understanding of cognition in AI.
AI Snips
ANECDOTE

Trophy Suitcase Problem

  • Walid Saba presented a linguistic problem built on a Winograd-schema-style pair of sentences about a trophy and a suitcase: "The trophy didn't fit in the suitcase because it was too big/small." What does "it" refer to?
  • Humans resolve the pronoun effortlessly using world knowledge, but statistical models like neural networks struggle because "big" and "small" occur with both nouns at similar frequencies; see the sketch after this list.
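A minimal sketch of the point, not from the episode: the co-occurrence counts below are invented, and the resolver simply picks whichever noun associates more strongly with the adjective. Because "big" and "small" pattern almost identically with both nouns, it returns the same referent for both sentences, whereas getting the pair right requires knowledge of containment rather than word statistics.

```python
# Naive "resolver" that picks the referent of "it" purely from
# noun/adjective association strength. The counts are made up for
# illustration; a purely statistical learner would extract something similar.
cooccurrence = {
    ("trophy", "big"): 120, ("trophy", "small"): 115,
    ("suitcase", "big"): 130, ("suitcase", "small"): 125,
}

def resolve_by_statistics(adjective):
    """Return the candidate noun most strongly associated with the adjective."""
    scores = {noun: cooccurrence[(noun, adjective)] for noun in ("trophy", "suitcase")}
    return max(scores, key=scores.get)

# Correct answers require world knowledge about fitting and containment:
# "...because it was too big"   -> "it" is the trophy
# "...because it was too small" -> "it" is the suitcase
gold = {"big": "trophy", "small": "suitcase"}

for adjective in ("big", "small"):
    guess = resolve_by_statistics(adjective)
    print(f"too {adjective}: statistical guess = {guess}, correct = {gold[adjective]}")
```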
INSIGHT

NLU's Combinatorial Explosion

  • The statistics of words like "small" and "big" are not the main issue in NLU.
  • The real challenge is the vast number of sentence templates and their combinatorial instantiations, far more than any training corpus could cover; a rough illustration follows this list.
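A back-of-the-envelope illustration with assumed vocabulary and template counts (not figures from the episode): even a single three-slot template yields an enormous number of instantiations, and multiplying by a realistic number of templates pushes the space far beyond anything a learner could see, let alone memorize, in training data.

```python
# One template with three open slots, e.g.
# "The <noun> didn't fit in the <container> because it was too <adjective>."
nouns_per_slot = 20_000      # assumed vocabulary of candidate nouns
containers_per_slot = 5_000  # assumed vocabulary of container nouns
adjectives_per_slot = 2_000  # assumed vocabulary of adjectives

instantiations_per_template = nouns_per_slot * containers_per_slot * adjectives_per_slot
print(f"one template: {instantiations_per_template:,} possible sentences")

# With even a modest number of distinct templates the count multiplies further.
num_templates = 100_000
print(f"{num_templates:,} templates: {num_templates * instantiations_per_template:.2e} sentences")
```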
INSIGHT

Augmented Machine Learning

  • Machine learning's success is often overstated: it relies heavily on human engineering and prior knowledge.
  • Fully connected neural networks, without hand-designed architectural priors such as convolutions, struggle even with tasks like image recognition; the sketch below makes the gap concrete.
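A small sketch of how much prior knowledge an architecture can encode, using parameter counts as a crude proxy (the image size and layer widths are assumptions, not figures from the episode): a convolutional layer hard-codes locality and weight sharing that a fully connected layer would otherwise have to learn from data.

```python
# Compare parameter counts for one layer over a 224x224 RGB image.
height, width, channels = 224, 224, 3
inputs = height * width * channels          # flattened pixel inputs

# Fully connected layer mapping the image to 64 hidden units:
# every unit looks at every pixel, no assumptions about locality.
fc_units = 64
fc_params = inputs * fc_units + fc_units    # weights + biases

# Convolutional layer with 64 filters of size 3x3:
# locality and translation invariance are engineered into the architecture.
filters, kernel = 64, 3
conv_params = filters * (kernel * kernel * channels) + filters

print(f"fully connected: {fc_params:,} parameters")
print(f"convolutional:   {conv_params:,} parameters")
# The roughly 5,000x gap is prior knowledge supplied by humans, not learned from data.
```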