
Efficient Methods for Natural Language Processing
The Data Exchange with Ben Lorica
Compared to Using 100% of the Data, You're Not Losing That Much
We were looking at how different training instances behave during training. This was for downstream tasks. We saw a very small reduction in the end results, in the end-domain results, so you're not losing that much and you're getting a 3x speedup. But when we evaluated these models out of distribution, on datasets they had not seen during training, we actually got models that perform better than models trained on 100% of the data.
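The episode doesn't spell out the selection rule, but the approach described, logging how individual training instances behave across epochs and then training on only a subset, is in the spirit of dataset-cartography-style data selection (Swayamdipta et al., 2020). Below is a minimal, hypothetical sketch of that idea: it assumes you have already logged each example's gold-label confidence at every epoch of a preliminary run, and it keeps roughly a third of the data, matching the 3x speedup mentioned. The function name, the `min_confidence` filter, and `keep_fraction` are illustrative, not from the episode.

```python
import numpy as np

def select_by_training_dynamics(per_epoch_confidence, keep_fraction=0.33,
                                min_confidence=0.1):
    """Pick a subset of training instances based on their training dynamics.

    per_epoch_confidence: shape (n_epochs, n_examples); the model's probability
    for the gold label at each epoch, logged during a preliminary training run.
    Returns the indices of the examples to keep for the final training run.
    """
    mean_conf = per_epoch_confidence.mean(axis=0)   # how easy each example is overall
    variability = per_epoch_confidence.std(axis=0)  # how unstable the model is on it
    # Drop examples the model never learns (possible label noise), then prefer
    # ambiguous, high-variability examples; consistently easy examples carry
    # little training signal and can be pruned to get the speedup.
    candidates = np.flatnonzero(mean_conf >= min_confidence)
    ranked = candidates[np.argsort(variability[candidates])[::-1]]
    n_keep = int(per_epoch_confidence.shape[1] * keep_fraction)
    return ranked[:n_keep]

# Example: fake logged confidences for 10,000 instances over 5 epochs.
rng = np.random.default_rng(0)
confidences = rng.uniform(size=(5, 10_000))
subset = select_by_training_dynamics(confidences)
print(f"kept {subset.size} of {confidences.shape[1]} instances")  # ~3,300
```

Training the final model only on `subset` is where the ~3x speedup would come from; the claim in the episode is that this costs little on the end domain and can even help out of distribution.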