Using 100% of the Data, You're Not Losing That Much
We were looking at how models trained on different data subsets behave during training. This was for downstream tasks. We saw only a very small reduction in the end results, that is, in the target-domain results, so you're not losing that much, and you're getting a 3x speedup. But when we evaluated these models out of distribution, on datasets they had not seen during training, we actually got models that perform better than ones trained on 100% of the data.
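The trade-off described above can be illustrated with a minimal sketch, not from the episode: estimating a quantity from roughly a third of the data and comparing it with the full-data estimate. The dataset, sample sizes, and the mean-estimation task are all assumptions chosen purely for illustration; the speakers are discussing model training, where the same intuition (a subset can recover nearly the same result at a fraction of the cost) applies.

```python
import random

random.seed(0)

# Hypothetical synthetic dataset (an assumption for illustration only).
data = [random.gauss(5.0, 2.0) for _ in range(30_000)]

# Estimate using 100% of the data.
full_mean = sum(data) / len(data)

# Estimate using ~33% of the data, mirroring the "3x speedup" framing:
# a third of the examples means roughly a third of the work.
subset = random.sample(data, len(data) // 3)
subset_mean = sum(subset) / len(subset)

print(f"full-data estimate:   {full_mean:.3f}")
print(f"1/3-subset estimate:  {subset_mean:.3f}")
print(f"difference:           {abs(full_mean - subset_mean):.3f}")
```

With a reasonably sized subset, the two estimates land very close together, which is the "you're not losing that much" point; the out-of-distribution gains the speakers mention are a separate empirical finding that this toy sketch does not capture.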