Chapters
Introduction
00:00 • 2min
COGS: A Compositional Generalization Challenge Based on Semantic Interpretation
02:19 • 3min
The Difference Between CFQ (Compositional Freebase Questions) and COGS
05:22 • 3min
COGS Uses Semantic Parsing
08:18 • 2min
The Convenience of Semantic Parsing
10:13 • 3min
Is There a Problem With Semantic Parsing?
12:46 • 3min
What Models Does the Study Evaluate in the Paper?
15:45 • 2min
What Are the High-Level Trends in the Results?
17:42 • 3min
Sequence-to-Sequence Models Without Structural Priors Don't Do Well on Generalization
21:09 • 4min
Is It a Good Idea to Use Bigger Models for COGS?
25:32 • 3min
Why Are Pre-Trained Models Not Doing Well on the Lexical Portion of the COGS Task?
28:48 • 1min
How Do You Evaluate a Pre-Trained Model on COGS?
30:08 • 3min
Is There a Confound in the Training Data?
32:55 • 2min
Pre-Trained Models - What's the Second Setup?
35:18 • 2min
Is There a Way to Fix the Outlier Issue?
37:32 • 3min
The Average Embedding Idea Didn't Work, Right?
40:33 • 3min
Is There Anything Else About This Work That You'd Like to Talk About?
44:01 • 2min
Is There a Single Good Way to Do Evaluation?
45:51 • 3min