NLP Highlights

120 - Evaluation of Text Generation, with Asli Celikyilmaz

Oct 3, 2020
Chapters
1. Introduction (00:00 • 5min)
2. The Challenges of Pushing Neural NLG Systems into Production (05:14 • 1min)
3. The Challenges of Evaluating Text Generation Systems (06:30 • 4min)
4. The Cost of Human Evaluation in Text Generation (10:08 • 5min)
5. How to Evaluate a Model Based on Tasks (14:57 • 2min)
6. Extrinsic Evaluations for Voice-Enabled Personal Assistants (17:03 • 3min)
7. How to Evaluate a Machine Learning Model (19:39 • 2min)
8. The Disadvantages of Inter-Annotator Agreement (21:22 • 4min)
9. Should You Push for High Inter-Annotator Agreement? (24:57 • 2min; see the agreement sketch after this list)
10. The Importance of Automatic Metrics in Text Generation (27:20 • 2min)
11. The Importance of Quality Metrics in Natural Language Generation (29:35 • 3min)
12. The Importance of Morphology in Machine Translation (32:16 • 2min)
13. The Limits of Machine Translation (34:17 • 2min)
14. The Differences Between BLEU and NLTK (36:08 • 2min; see the BLEU sketch after this list)
15. How to Use Multiple References in Summarization Tasks (38:21 • 3min; the BLEU sketch below shows multi-reference scoring)
16. The Role of ROUGE in Summarization (41:26 • 2min; see the ROUGE sketch below)
17. The Limitations of Learned Metrics for Text Generation (43:48 • 4min; see the BERTScore sketch below)
18. The Importance of Factual Consistency Measures in Language Generation (47:26 • 5min; see the NLI sketch below)
19. The Challenges in Evaluating Text Generation Systems (52:47 • 2min)
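
Chapters 8 and 9 cover inter-annotator agreement in human evaluation. As a minimal sketch, chance-corrected agreement between two annotators can be computed with Cohen's kappa via scikit-learn; the rating lists here are invented for illustration, not taken from the episode.

```python
# Cohen's kappa: agreement between two annotators, corrected for chance.
from sklearn.metrics import cohen_kappa_score

# Hypothetical 1-5 quality ratings from two annotators on the same eight outputs.
annotator_a = [5, 4, 4, 2, 1, 3, 5, 4]
annotator_b = [5, 3, 4, 2, 2, 3, 4, 4]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.3f}")  # 1.0 = perfect agreement, 0.0 = chance level
```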
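Chapters 14 and 15 touch on BLEU, its NLTK implementation, and multi-reference scoring. Below is a minimal sketch using NLTK's sentence_bleu on made-up tokenized sentences; note that different BLEU implementations (e.g., NLTK vs. sacreBLEU) can disagree because of tokenization and smoothing choices.

```python
# Sentence-level BLEU with NLTK; supports multiple references per hypothesis.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Two hypothetical tokenized references for the same source sentence.
references = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["there", "is", "a", "cat", "on", "the", "mat"],
]
hypothesis = ["the", "cat", "is", "on", "the", "mat"]

# Smoothing avoids a zero score when some higher-order n-gram has no match.
score = sentence_bleu(references, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```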
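Chapter 16 discusses ROUGE for summarization. A minimal sketch with Google's rouge-score package (pip install rouge-score); the reference and candidate summaries are invented for illustration.

```python
# ROUGE-1/2/L between a reference summary and a system summary.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "the cat sat on the mat"          # illustrative reference summary
candidate = "the cat is sitting on the mat"   # illustrative system summary

scores = scorer.score(reference, candidate)   # precision/recall/F1 per metric
for name, s in scores.items():
    print(f"{name}: P={s.precision:.2f} R={s.recall:.2f} F={s.fmeasure:.2f}")
```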
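Chapter 17 concerns learned metrics. One widely used example is BERTScore, which compares a candidate and a reference through contextual embeddings rather than exact n-gram overlap; here is a minimal sketch with the bert-score package (pip install bert-score), again on invented sentences.

```python
# BERTScore: embedding-based similarity between candidates and references.
from bert_score import score

candidates = ["the cat is sitting on the mat"]
references = ["the cat sat on the mat"]

# Downloads a default English model on first use.
P, R, F1 = score(candidates, references, lang="en")
print(f"BERTScore F1: {F1.item():.3f}")
```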
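Chapter 18 covers factual consistency measures. One common family of checks runs a natural language inference model over (source, generated text) pairs and inspects the entailment and contradiction probabilities; this is a hedged sketch of that general idea using the public roberta-large-mnli checkpoint from Hugging Face, not the specific measures discussed in the episode.

```python
# NLI-based factual consistency check: does the source entail the summary?
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

source = "The company reported a 10% rise in quarterly revenue."  # illustrative
summary = "The company's quarterly revenue fell."                 # inconsistent claim

inputs = tokenizer(source, summary, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

# Label order for roberta-large-mnli: 0=contradiction, 1=neutral, 2=entailment.
for label, p in zip(["contradiction", "neutral", "entailment"], probs.tolist()):
    print(f"{label}: {p:.2f}")
```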