
High-Stakes Alignment via Adversarial Training [Redwood Research Report]

AI Safety Fundamentals: Alignment

How to Use a Tool-Assisted Attack to Improve Your Classifier Score

There's a token substitution tool in which every token is color-coded: tokens highlighted in yellow are likely to have more impact on the classifier, and you can click on a token to see alternative suggestions. We made our classifier conservative enough to reject over half of the proposed completions. Adversarial training improved adversarial robustness, but our tool-assisted attack still seems quite strong.
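The attack loop this describes is straightforward to express in code. Below is a minimal sketch, assuming a black-box classifier_score function (higher means more likely to be rejected) and a candidate_tokens source of substitution suggestions; both names, and the occlusion-based saliency estimate, are illustrative assumptions rather than Redwood's actual implementation.

```python
# Hypothetical sketch of a saliency-guided token-substitution attack.
# `classifier_score` stands in for the injury classifier (higher = more
# likely to be flagged) and `candidate_tokens` for the tool's suggestions.
from typing import Callable, List


def token_saliency(tokens: List[str],
                   classifier_score: Callable[[List[str]], float]) -> List[float]:
    """Estimate each token's impact by occlusion: delete the token and
    measure how much the classifier's score moves."""
    base = classifier_score(tokens)
    return [abs(base - classifier_score(tokens[:i] + tokens[i + 1:]))
            for i in range(len(tokens))]


def greedy_substitution_attack(tokens: List[str],
                               classifier_score: Callable[[List[str]], float],
                               candidate_tokens: Callable[[str], List[str]],
                               threshold: float) -> List[str]:
    """Repeatedly rewrite the highest-impact token (the one the tool would
    highlight in yellow), keeping the substitution that most lowers the
    classifier's score, until the completion slips under the threshold."""
    tokens = list(tokens)
    while classifier_score(tokens) >= threshold:
        saliency = token_saliency(tokens, classifier_score)
        i = max(range(len(tokens)), key=saliency.__getitem__)
        candidates = candidate_tokens(tokens[i])
        if not candidates:
            break  # the tool has no suggestions for this token
        best = min(candidates,
                   key=lambda t: classifier_score(tokens[:i] + [t] + tokens[i + 1:]))
        if classifier_score(tokens[:i] + [best] + tokens[i + 1:]) >= classifier_score(tokens):
            break  # no substitution helps; the attack has stalled
        tokens[i] = best
    return tokens
```

Occlusion is just the simplest possible stand-in for the yellow highlighting; a real tool could rank tokens with gradient-based saliency instead, and a human attacker would substitute their own judgment for the greedy inner loop.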
