How Much Training Did You Do to Train These Adversarial Policies?
The tasks we were using were all simulated robotics environments. The policies that we were attacking were trained via self-play to win at these zero-sum games. We think the attack succeeds probably because self-play is only exploring a small amount of the possible space of policies: you can easily cover some part of policy space, but the resulting policy isn't robust. And our adversarial policies were trained for no more than 20 million timesteps, which is still a lot in absolute terms.
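To make the setup concrete, here is a minimal sketch of the idea: an adversary is trained against a frozen victim policy in a zero-sum game, while the victim never updates. This is an illustrative assumption, not the speakers' actual code; it stands in a toy matrix game for the simulated robotics environments, and the names (`PAYOFF`, `VICTIM_PROBS`, `train_adversary`) and the REINFORCE learner are all hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-sum payoff for the adversary (matching pennies):
# rows = adversary action, cols = victim action.
PAYOFF = np.array([[1.0, -1.0],
                   [-1.0, 1.0]])

# Frozen victim policy, standing in for a self-play-trained policy.
# Its slight bias is the non-robust "hole" in policy space the
# adversary can exploit.
VICTIM_PROBS = np.array([0.6, 0.4])

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def train_adversary(steps=20_000, lr=0.1):
    """REINFORCE against the frozen victim; only the adversary learns."""
    logits = np.zeros(2)  # adversary's policy parameters
    for _ in range(steps):
        probs = softmax(logits)
        a = rng.choice(2, p=probs)         # adversary acts
        v = rng.choice(2, p=VICTIM_PROBS)  # frozen victim acts
        reward = PAYOFF[a, v]              # zero-sum payoff
        # Policy gradient: d log pi(a) / d logits = onehot(a) - probs
        grad_logp = -probs
        grad_logp[a] += 1.0
        logits += lr * reward * grad_logp
    return softmax(logits)

print("adversary policy:", train_adversary())
```

Run as-is, the adversary converges toward the action that exploits the victim's bias, illustrating the point in the answer: a self-play victim that only explored part of policy space can be beaten with comparatively little adversarial training.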