
123 - Robust NLP, with Robin Jia
NLP Highlights
How to Preserve Labels in a Training Data Set
Host: The training data set is like a big list of possible substitutions, and then at test time you have basically similar lists for every example in the test set. So how do we know that the model is more robust, given that the train and test sets are, in a sense, more similar?

Robin Jia: Yeah, I think that's the million-dollar question: how do you go from these settings? This is robustness in one sense, but it doesn't at all imply that the model is robust in other senses.
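The substitution lists described above can be sketched as a simple augmentation step. This is a minimal illustration, not the actual setup from the episode: the `SUBSTITUTIONS` dictionary and the `augment` function are hypothetical, and the key assumption is that every listed swap preserves the example's label.

```python
import random

# Hypothetical label-preserving substitution lists: each word maps to
# alternatives assumed NOT to change the example's label.
SUBSTITUTIONS = {
    "movie": ["film", "picture"],
    "great": ["wonderful", "fantastic"],
    "terrible": ["awful", "dreadful"],
}

def augment(sentence, n_variants=3, seed=0):
    """Generate variants of `sentence` by swapping words for listed
    substitutes; the original label is assumed to carry over unchanged."""
    rng = random.Random(seed)
    words = sentence.split()
    variants = set()
    # Oversample a few times, dedupe, and stop once we have enough.
    for _ in range(n_variants * 5):
        new = [rng.choice([w] + SUBSTITUTIONS.get(w, [])) for w in words]
        candidate = " ".join(new)
        if candidate != sentence:
            variants.add(candidate)
        if len(variants) >= n_variants:
            break
    return sorted(variants)

print(augment("the movie was great"))
```

As the exchange notes, training and evaluating with the same kind of substitution lists only demonstrates robustness to those particular perturbations, not robustness in general.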