
1 - Adversarial Policies with Adam Gleave

AXRP - the AI X-risk Research Podcast


How Much Training Did You Do to Train These Adversarial Policies?

The tasks we were using were all simulated robotics environments. The policies that we were attacking were trained via self-play to win at these zero-sum games. We think this is probably because self-play only explores a small amount of the possible space of policies, so you can easily find some part of policy space where it's not robust. And our adversarial policies were trained for no more than 20 million timesteps, which is still a lot in absolute terms.
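A minimal sketch of the kind of setup described here, not the authors' actual code: the self-play victim is frozen and folded into the environment, so the adversary can be trained with ordinary single-agent RL (here Stable-Baselines3 PPO) for a fixed timestep budget. The helpers `make_two_player_env` and `load_frozen_victim`, and the two-player environment API they imply, are hypothetical placeholders for whatever zero-sum robotics task and pretrained self-play policy is being attacked.

```python
import gymnasium as gym
from stable_baselines3 import PPO


class VictimAsEnv(gym.Env):
    """Wrap a two-player zero-sum env so the frozen victim acts inside step()."""

    def __init__(self, two_player_env, victim_policy):
        self.env = two_player_env          # hypothetical two-player environment
        self.victim = victim_policy        # frozen self-play policy, never updated
        self.observation_space = two_player_env.observation_space_adversary
        self.action_space = two_player_env.action_space_adversary

    def reset(self, *, seed=None, options=None):
        obs_adv, self._obs_victim = self.env.reset(seed=seed)
        return obs_adv, {}

    def step(self, adversary_action):
        # The victim's action comes from its frozen policy; only the adversary learns.
        victim_action = self.victim.predict(self._obs_victim, deterministic=True)[0]
        (obs_adv, self._obs_victim), reward_adv, terminated, truncated, info = \
            self.env.step(adversary_action, victim_action)
        return obs_adv, reward_adv, terminated, truncated, info


# Hypothetical usage: attack a self-play-trained victim within a 20M-timestep budget.
env = VictimAsEnv(make_two_player_env("SumoHumans"), load_frozen_victim("SumoHumans"))
adversary = PPO("MlpPolicy", env, verbose=1)
adversary.learn(total_timesteps=20_000_000)
```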

