Multi-GPU Training
Andi: There's a constraint on the accuracy that your trained model is supposed to achieve. We realized if we actually just used 64 by 64 images, it trained a pretty good model. And then we could take that same model and just give it a couple of epochs to learn 224 by 224 images, and it was basically already trained. He says multi-GPU training in general has become less clunky, but anything that slows down iteration speed is a waste of time. "Why test things on 1.3 million images? Most of us don't use 1.3 million images."
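
What Andi describes is commonly known as progressive resizing: do most of the training cheaply at low resolution, then fine-tune the same weights for a couple of epochs at full resolution. Below is a minimal sketch of that idea, assuming PyTorch/torchvision with a ResNet-18 and an illustrative `ImageFolder` dataset path; the model, paths, and epoch counts are assumptions for illustration, not the speaker's actual setup.

```python
# Sketch of progressive resizing: train at 64x64, then fine-tune at 224x224.
# Dataset path "data/train", ResNet-18, and epoch counts are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms


def make_loader(root, size, batch_size=64):
    # Rebuild only the input pipeline at the new resolution; everything else stays fixed.
    tfms = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
    ])
    return DataLoader(datasets.ImageFolder(root, tfms),
                      batch_size=batch_size, shuffle=True)


def train(model, loader, epochs, lr=1e-3):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()


# ResNet's global average pool makes it agnostic to input size,
# so the same weights work at both resolutions.
model = models.resnet18(num_classes=10)

# Phase 1: most of the training happens cheaply on small images.
train(model, make_loader("data/train", size=64), epochs=20)

# Phase 2: as in the quote, a couple of epochs at 224x224 is enough,
# because the model is "basically already trained".
train(model, make_loader("data/train", size=224), epochs=2)
```

This is also why the approach helps iteration speed: the expensive full-resolution passes are reduced to a short fine-tuning phase at the end, rather than being paid for on every experiment.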