Is Data Augmentation Really Useful for Non-Contrastive Learning?
The standard scenario, which a lot of people working in this area are using, is that you apply a set of distortions. One basically just shifts the image a little bit, that's called cropping. Another one changes the scale a little bit. Another one changes the colors or the saturation, another one sort of blurs it, another one adds noise. So you train with those distortions, and then you chop off the last layer or the last couple of layers of the network and use the representation as input to a classifier. You train the classifier on ImageNet, let's say, or whatever, and measure the performance.
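A minimal sketch of that evaluation protocol, using torchvision: the specific distortion parameters, the ResNet-50 stand-in for the self-supervised encoder, and the probe hyperparameters are all assumptions for illustration, not details given in the episode.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Distortion pipeline roughly matching the ones described:
# crop/shift, rescale, color/saturation jitter, blur, noise.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),                            # crop + scale change
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),                   # color / saturation
    transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0)),    # blur
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x + 0.05 * torch.randn_like(x)),  # additive noise
])

# Linear-probe evaluation: chop off the last layer of the trained backbone,
# freeze it, and train only a linear classifier on the frozen features.
backbone = models.resnet50(weights=None)   # stand-in for the self-supervised encoder
backbone.fc = nn.Identity()                # "chop off the last layer"
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

probe = nn.Linear(2048, 1000)              # e.g. 1000 ImageNet classes
optimizer = torch.optim.SGD(probe.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

def probe_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step of the linear classifier on frozen features."""
    with torch.no_grad():
        feats = backbone(images)           # frozen representation
    loss = criterion(probe(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The performance of this linear probe on the labeled dataset is what gets reported as the quality of the learned representation.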