Developing Confidence in Model Explainability and Adversarial Examples
The idea behind adversarial examples is that a neural network trained to classify images can be fooled: by changing one tiny detail of an image, often imperceptible to the human eye, the network becomes completely confident the image is something it is not. This combination of opaqueness and fragility means a neural network would require a higher degree of confidence before it can be trusted in use.
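The episode doesn't name a specific attack, but the effect described is well illustrated by the Fast Gradient Sign Method (FGSM): nudge every pixel a tiny amount in the direction that increases the classifier's loss. The PyTorch sketch below is a minimal illustration under that assumption; the toy `model`, input image, and label are hypothetical stand-ins, not anything discussed in the episode.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Perturb `image` by epsilon along the sign of the loss gradient
    (Fast Gradient Sign Method). The change per pixel is at most epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One imperceptible step per pixel, in the direction that hurts the model most.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Toy demonstration with a hypothetical classifier (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28)   # stand-in for a real input image
label = torch.tensor([3])          # stand-in for its true class
adv = fgsm_attack(model, image, label)
print((adv - image).abs().max())   # perturbation stays bounded by epsilon
```

A perturbation bounded by a small epsilon is exactly the "tiny detail" the description refers to: the adversarial image looks identical to a human, yet can flip the model's prediction with high confidence.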