Speaker 2
Totally makes sense. The only caveat would be if the output in any way resembles the original paintings, meaning like a bit-for-bit kind of copy. Is there any of that going on here? Meaning, let's just say we take a look at one of these paintings. Okay, this "Tangerine Sun" you have here: beautiful clouds, fantastic. I mean, this could easily have just been painted by a human. It's a beautiful painting. Could any part of those clouds be
Speaker 3
identical to something that it saw somewhere else? Or is that just mathematically impossible? So, the way that I believe everything works is because of the data set.
Speaker 1
It would be one thing if every image in the data set had the exact same clouds; then the outputs would have those exact clouds, and that would be a bit more questionable. But when the data set is such a huge amount of images, in different styles, pictures with clouds and without clouds, then what ends up happening is that those outputs end up not being, like you said, bit-for-bit copies. It ends up being pretty much what the GAN does, which, like I was saying, is a generative adversarial network. All it does is try to create its own output that can trick itself into thinking, okay, this output could plausibly have come from the input data set. It keeps doing that, and it almost plays this game with itself, where one part of it is creating pictures, and a different part of it is going through outputs that the other side of it created, alongside ones from the data set. It's pretty much grabbing them at random and asking: is this from the input data set, or is this one of the outputs it just created? And it keeps doing that until it's able to convince itself that the outputs it's generating could be from the input data set.
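Since this turn walks through the GAN's two-player game step by step, here is a minimal sketch of that loop in PyTorch. Everything in it is illustrative rather than the system the speakers actually trained: the layer sizes, the noise dimension, and the random tensor standing in for a batch of real paintings are all assumptions made for the sake of a runnable example.

```python
# A minimal sketch of the generator-vs-discriminator game described above (PyTorch).
import torch
import torch.nn as nn

# Generator: maps random noise to a fake "image" (a flattened 28x28 vector here).
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
# Discriminator: guesses whether a sample came from the data set or from G.
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss = nn.BCELoss()

for step in range(10_000):
    # Stand-in for a batch of real images from the data set (hypothetical data).
    real = torch.rand(32, 784) * 2 - 1
    fake = G(torch.randn(32, 64))

    # Discriminator turn: label data-set samples 1, generated samples 0.
    opt_D.zero_grad()
    d_loss = (loss(D(real), torch.ones(32, 1))
              + loss(D(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_D.step()

    # Generator turn: try to make D believe the fakes came from the data set.
    opt_G.zero_grad()
    g_loss = loss(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_G.step()
```

The pair of opposing labels is the whole trick: the discriminator is trained to say "from the data set" only for real samples, while the generator is trained against the opposite target, which is exactly the "trick itself" dynamic described above.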