EP68 How We Attack AI? Learn More at Our RSA Panel!

Cloud Security Podcast by Google

Machine Learning Models Are Robust to Noise in the Labels

There is a big difference between average-case and worst-case robustness or security. In machine learning, we can have systems that are very resilient to noise in the labels. The data might just be incorrectly labeled, and the model behaves perfectly fine. As long as you label it wrong randomly, the model becomes slightly less confident on all the other data, but it's not any worse. It's very, very good in the random setting, and very, very bad in the worst-case setting. This is an example of something that's legitimately unique to a machine learning system. I don't know if anyone else has ever thought about this.
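A minimal sketch of the gap the speaker describes, not anything from the episode itself: it trains the same classifier under random label flips and under a crude stand-in for worst-case noise (relabeling one class as a look-alike). It assumes scikit-learn and NumPy; the dataset, model, and targeting heuristic are all illustrative choices.

```python
# Contrast random label noise with concentrated, adversary-style noise.
# The "worst case" here is only a cheap heuristic, not a true optimum.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def test_accuracy(y_train):
    """Fit on (possibly corrupted) labels, score on the clean test set."""
    model = LogisticRegression(max_iter=5000)
    return model.fit(X_tr, y_train).score(X_te, y_te)

# Targeted noise: relabel every '3' as an '8', concentrating the whole
# corruption budget on a single decision boundary.
targets = np.where(y_tr == 3)[0]
y_adv = y_tr.copy()
y_adv[targets] = 8

# Random noise: the *same* number of flips, but indices and replacement
# labels drawn uniformly, so the damage is spread across all classes.
rng = np.random.default_rng(0)
y_rand = y_tr.copy()
idx = rng.choice(len(y_tr), size=len(targets), replace=False)
y_rand[idx] = rng.integers(0, 10, size=len(targets))

print(f"clean labels:    {test_accuracy(y_tr):.3f}")
print(f"random flips:    {test_accuracy(y_rand):.3f}")
print(f"targeted flips:  {test_accuracy(y_adv):.3f}")
```

On a run like this, the random condition typically costs only a point or two of accuracy, while the targeted condition erases most of the model's ability to recognize the attacked class: the average-case versus worst-case gap in miniature.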
