High-Stakes Alignment via Adversarial Training [Redwood Research Report]

AI Safety Fundamentals: Alignment

Building a Classifier to Detect Injuries

We focused on building a classifier that reliably detects injuries. We then iteratively attacked the classifier, first with unaugmented humans, then with automatic paraphrases of previous adversarial examples, and finally with tool-assisted human rewrites, training on the adversarial examples produced at each stage. Our final attack was a tool-assisted rewriting process in which we built language-model-powered tools to help our contractors find classifier failures.
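
To make that loop concrete, here is a minimal, self-contained sketch of the train-attack-retrain cycle in Python. Everything in it is a hypothetical stand-in, not Redwood's actual code: the keyword-overlap "classifier" replaces the fine-tuned language model, the synonym table replaces the language-model paraphraser, and the hard-coded candidate list replaces contractors using rewriting tools.

```python
from typing import Callable, List, Tuple

Example = Tuple[str, bool]  # (story snippet, contains-injury label)

def train(data: List[Example]) -> Callable[[str], bool]:
    """Toy stand-in for the injury classifier: flags a snippet if it
    contains a word seen only in injury-labeled training examples."""
    pos = {w for text, y in data if y for w in text.lower().split()}
    neg = {w for text, y in data if not y for w in text.lower().split()}
    injury_words = pos - neg
    return lambda text: any(w in injury_words for w in text.lower().split())

# Tiny synonym table standing in for a language-model paraphraser.
SYNONYMS = {"wounded": "injured", "fracture": "break", "cut": "gash"}

def paraphrase_attack(failures: List[str]) -> List[str]:
    """Stand-in for automatically paraphrasing previous adversarial
    examples."""
    return [
        text.replace(word, synonym)
        for text in failures
        for word, synonym in SYNONYMS.items()
        if word in text
    ]

def human_rewrite_attack(classifier: Callable[[str], bool]) -> List[str]:
    """Stand-in for (tool-assisted) human attackers: returns injury
    snippets the current classifier fails to flag."""
    candidates = [
        "she was wounded in the crash",
        "the fracture kept him off his feet",
        "a deep cut ran across his palm",
    ]
    return [c for c in candidates if not classifier(c)]

# Seed data, then iterate: train, attack, fold failures back in.
data: List[Example] = [
    ("they shared a quiet meal in the park", False),
    ("the fall left her hurt and bleeding", True),
]
for round_num in range(3):
    classifier = train(data)
    failures = human_rewrite_attack(classifier)
    failures += paraphrase_attack(failures)
    data += [(text, True) for text in failures]
    print(f"round {round_num}: {len(failures)} new adversarial examples")
```

On this toy data the holes close after a single round; the process described in the report is the same loop run at scale, with each stage supplying harder-to-find failures for retraining.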
