The Thesis Review

[34] Sasha Rush - Lagrangian Relaxation for Natural Language Decoding

Oct 20, 2021
Sasha Rush, an Associate Professor at Cornell Tech and a researcher at Hugging Face, delves into the intricacies of Natural Language Processing. He shares insights from his PhD thesis on Lagrangian Relaxation and its relevance today. The conversation touches on the balance between discrete and continuous algorithms, the evolution of coding practices, and the importance of community in open-source innovations. Additionally, they explore navigating depth and breadth in academia and the necessity of risk-taking in research for true innovation.
INSIGHT

Discrete vs Continuous in NLP

  • Sasha Rush prefers discrete algorithms, finding the atomic nature of words mathematically interesting.
  • While continuous relaxations are useful tools, discrete phenomena like coreference are essential in language.
ANECDOTE

Sasha's Path to NLP Research

  • Sasha's background is in computer science with a focus on algorithms; his interest in language drew him to NLP.
  • Although he initially pursued software engineering, he returned to research, attracted by longer-term problems and Michael Collins' work.
INSIGHT

Decoding in NLP

  • Decoding, synonymous with MAP (maximum a posteriori) inference, aims to find the single highest-scoring output under a model.
  • In NLP, it traditionally focused on inferring hidden structures such as part-of-speech tags or parse trees; a sketch of classic MAP decoding follows this list.
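To make the idea concrete, here is a minimal sketch of MAP decoding with the Viterbi algorithm for a toy hidden Markov model part-of-speech tagger. The tags, vocabulary, and probabilities are hypothetical, invented purely for illustration; this is not code from the episode or the thesis.

```python
# Minimal sketch: MAP decoding (Viterbi) for a toy HMM POS tagger.
# All tags, words, and probabilities below are made-up assumptions.
import math

tags = ["DET", "NOUN", "VERB"]

# Hypothetical log-probabilities for starting tags, tag transitions,
# and word emissions.
start = {"DET": math.log(0.6), "NOUN": math.log(0.3), "VERB": math.log(0.1)}
trans = {
    "DET":  {"DET": math.log(0.05), "NOUN": math.log(0.90), "VERB": math.log(0.05)},
    "NOUN": {"DET": math.log(0.10), "NOUN": math.log(0.30), "VERB": math.log(0.60)},
    "VERB": {"DET": math.log(0.50), "NOUN": math.log(0.40), "VERB": math.log(0.10)},
}
emit = {
    "DET":  {"the": math.log(0.90), "dog": math.log(0.05), "barks": math.log(0.05)},
    "NOUN": {"the": math.log(0.05), "dog": math.log(0.85), "barks": math.log(0.10)},
    "VERB": {"the": math.log(0.05), "dog": math.log(0.05), "barks": math.log(0.90)},
}

def viterbi(words):
    """Return the MAP tag sequence: the argmax over tag sequences of joint log-prob."""
    # Dynamic program: best[i][t] = score of the best tag sequence for
    # words[:i+1] ending in tag t; back[i][t] remembers the predecessor tag.
    best = [{t: start[t] + emit[t][words[0]] for t in tags}]
    back = [{}]
    for i in range(1, len(words)):
        best.append({})
        back.append({})
        for t in tags:
            prev, score = max(
                ((p, best[i - 1][p] + trans[p][t]) for p in tags),
                key=lambda x: x[1],
            )
            best[i][t] = score + emit[t][words[i]]
            back[i][t] = prev
    # Recover the argmax path by following back-pointers from the best final tag.
    last = max(tags, key=lambda t: best[-1][t])
    path = [last]
    for i in range(len(words) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return list(reversed(path))

print(viterbi(["the", "dog", "barks"]))  # -> ['DET', 'NOUN', 'VERB']
```

Running this prints the single best tag sequence, the MAP estimate under the toy model. Exact dynamic programs like this break down when models are combined or constrained, which is the kind of decoding problem the Lagrangian relaxation methods in Rush's thesis are aimed at.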