Why Can't Humans Tell the Difference?
In another study, where I'm sharing preliminary results, we think about explanations from saliency map methods. And perhaps the other way around is true too: what if there's information that machines can tell apart, but humans can't? It turns out that if I train a model with the same data and the same architecture, but just different seeds, I now have two models that achieve roughly the same accuracy but with literally different seeds and different weights. When I get explanations from them, humans can't tell the difference. They look the same. I've looked at many, many of those myself. But a neural network can tell which model produced which explanation.
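A minimal sketch of the kind of setup described here, assuming PyTorch, vanilla gradient saliency, random tensors as stand-in images, and a small classifier as the "machine" that tries to tell the two explanation sources apart. All architectures, data, and hyperparameters are illustrative placeholders, not the study's actual implementation.

```python
# Two models that differ only in their random seed: their saliency maps
# look alike to humans, but a small classifier may still tell them apart.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_model(seed: int) -> nn.Sequential:
    """Same architecture every time; only the seed (hence the initial
    weights) differs between the two models."""
    torch.manual_seed(seed)
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(16 * 28 * 28, 10),
    )


def saliency_map(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Vanilla gradient saliency: |d(max logit)/d(input)|."""
    x = x.clone().requires_grad_(True)
    score = model(x).max(dim=1).values.sum()
    score.backward()
    return x.grad.abs().detach()


# Same data, same architecture, different seeds.
model_a, model_b = make_model(seed=0), make_model(seed=1)
# (Training both models on the same dataset would go here; omitted for brevity.)

# Build a dataset of saliency maps labelled by which model produced them.
inputs = torch.randn(256, 1, 28, 28)  # stand-in for real images
maps = torch.cat([saliency_map(model_a, inputs), saliency_map(model_b, inputs)])
labels = torch.cat([torch.zeros(256), torch.ones(256)]).long()

# The "machine" that tries to distinguish the two explanation sources.
discriminator = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 2)
)
opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(discriminator(maps), labels)
    loss.backward()
    opt.step()

# How well the machine separates maps that humans would call identical.
acc = (discriminator(maps).argmax(dim=1) == labels).float().mean()
print(f"discriminator accuracy on saliency maps: {acc:.2f}")
```

The design choice worth noting is that the saliency maps are detached before being fed to the discriminator, so the comparison is purely about whether the explanations themselves carry a model-specific signal, not about backpropagating through the original models.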