
1 - Adversarial Policies with Adam Gleave

AXRP - the AI X-risk Research Podcast


Will Adversarial Examples Disappear at Human-Level Classification Accuracy on Natural Images?

There are a lot of people whose opinions I respect who think that adversarial examples will just disappear once we have human-level classification accuracy on natural images. And you can point to humans seeming not to suffer from adversarial examples very much. But it definitely does seem likely that, you know, in a sufficiently high-dimensional space, it is just going to be impossible to cover every area and be adversarially robust everywhere.
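For readers unfamiliar with the term, here is a minimal sketch of one standard way adversarial examples are constructed, the Fast Gradient Sign Method (FGSM). This is a generic illustration, not the specific method discussed in the episode; the model and input below are randomly initialised placeholders rather than a trained classifier.

```python
# Minimal FGSM sketch: nudge an input a tiny amount in the direction that
# increases the classifier's loss, and see whether the prediction changes.
# Assumptions: the model is an untrained placeholder and the "image" is random
# noise, so the perturbed prediction may or may not flip; with a trained image
# classifier the same procedure reliably produces misclassified inputs.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder classifier: a linear model over flattened 32x32 RGB images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.rand(1, 3, 32, 32)      # a stand-in "natural" image, values in [0, 1]
y = model(x).argmax(dim=1)        # the model's current prediction, used as the label

# FGSM: one gradient step of size epsilon in the sign of the input gradient.
epsilon = 0.03
x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

print("original prediction: ", y.item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
print("max pixel change:    ", (x_adv - x).abs().max().item())
```

The point of the sketch is the size of the perturbation: every pixel moves by at most epsilon, which is visually negligible, yet in a high-dimensional input space such small, coordinated changes can be enough to move an input across a decision boundary.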

