Is There a Limitation on the Sets of Transformations?
In your paper you benchmarked a bunch of different transformations, at least for images. Is there some performance limitation on how much you can compose these and generate these aggregations, versus the commit-hash-plus-sequence-of-transformations approach? Does that make sense? It does seem like there's some trade-off surface here, and I'm trying to understand what it is.