

Trends in Natural Language Processing with Nasrin Mostafazadeh - #337
Jan 9, 2020
In this engaging discussion, Nasrin Mostafazadeh, a Senior AI Research Scientist at Elemental Cognition, shares her insights on the evolution of Natural Language Processing (NLP). She highlights the transformative impact of large pre-trained models like BERT and GPT-2. Nasrin dives into the ethical implications of AI, including bias and accessibility, and stresses the importance of interpretability in AI systems. The conversation also touches on the challenges of using AI in educational assessments and on efforts to enhance common sense reasoning within NLP.
AI Snips
2019 NLP Trends
- Large pre-trained models have transformed NLP research, enabling breakthroughs across a wide range of tasks.
- Researchers have begun focusing on these models' weaknesses, such as bias and lack of interpretability.
Resource Intensive Models
- Building large language models requires expensive compute resources, creating a barrier to entry for many researchers.
- These models also carry substantial environmental costs due to their high energy consumption.
Attention is Not Explanation
- Attention mechanisms in neural networks do not reliably explain model predictions.
- Researchers debate the nature and definition of "explanation" in AI models.