The Impossibility of Being Adversarially Robust in AI
In the AI realm, when I hear that something is vulnerable to adversaries, I'm really worried, because I'm like, well, what if my AI is an adversary? So do you think that this worry applies in the heuristic arguments case?

Yeah. It's basically impossible to be adversarially robust. For any quantity with sufficient noise, even if you very strongly expect the noise to average out to zero, there will exist a heuristic argument that only points out the positive values of the noise and drives you down. And so I think this is naively quite a big problem for, like, the entire heuristic argument paradigm.
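To make the point concrete, here is a small, hypothetical Python sketch (not from the episode): it models the quantity as a sum of many zero-mean noise terms, so a priori the total is expected to be near zero, and shows that an argument which selectively surfaces only one sign of the noise pushes the estimate far from zero in whichever direction the selection favors.

```python
import numpy as np

# Toy model (assumption for illustration): the quantity of interest is a sum of
# many independent zero-mean noise terms, so we expect it to average out to ~0.
rng = np.random.default_rng(seed=0)
noise = rng.normal(loc=0.0, scale=1.0, size=10_000)

# An honest estimate that accounts for every noise term stays close to zero.
full_estimate = noise.sum()

# An adversarially chosen "argument" that only points out one sign of the noise
# pushes the estimate far from zero, in whichever direction the selection favors.
only_positive = noise[noise > 0].sum()   # drives the estimate up
only_negative = noise[noise < 0].sum()   # drives the estimate down

print(f"all terms:            {full_estimate:10.1f}")
print(f"only positive terms:  {only_positive:10.1f}")
print(f"only negative terms:  {only_negative:10.1f}")
```

The selective estimates here are on the order of thousands in magnitude while the full sum stays within a few hundred of zero, which is the sense in which a cherry-picking heuristic argument can move you arbitrarily far even when the noise genuinely averages out.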