The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Common Sense Reasoning in NLP with Vered Shwartz - #461

Mar 4, 2021
Vered Shwartz, a postdoctoral researcher at the Allen Institute for AI and the University of Washington, dives deep into common sense reasoning in natural language processing. She shares insights on training neural networks, the challenges of integrating common sense knowledge, and the 'self-talk' model, which enhances contextual understanding. Vered also discusses biases that language models inherit from their training data and explores multimodal reasoning, with the aim of improving AI's grasp of human-like logic and communication.
ANECDOTE

Vered's NLP Journey

  • Vered Shwartz's interest in NLP began with two courses during her undergraduate studies.
  • This led to a master's degree and then a PhD, fueled by a passion for research.
INSIGHT

Lexical Semantics Research

  • Vered's thesis explored lexical semantics, focusing on how words relate to one another.
  • She used knowledge bases and neural networks to discover these relationships.
INSIGHT

Common Sense Reasoning

  • Vered's research at AI2 aims to improve common sense reasoning in NLP models.
  • She believes common sense can make models more robust in real-world scenarios.