21 - Interpretability for Engineers with Stephen Casper

AXRP - the AI X-risk Research Podcast

How to Make a Model More Robust to Adversarial Inputs

The other would be mechanistic adversarial training or latent adversarial training. So really concretely, imagine that there's a system that's going to destroy the world once it sees the factors of RSA-2048. Just like mechanistic interpretability can help you find this neuron, latent adversarial training could give you perturbations to the model internals. It just might be a lot easier to make models hallucinate that they want to go do something bad than to make them actually find triggers for bad behavior in the input space.
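To make the idea concrete, here is a minimal PyTorch sketch of latent adversarial training, not taken from the episode: instead of searching for an adversarial input (which may be intractable, like factoring RSA-2048), we search for a small perturbation to the hidden activations that maximizes the loss, then train against it. The split into `encoder` and `head`, the function name, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def latent_adversarial_step(encoder, head, x, y, eps=0.1, n_steps=5, lr=0.02):
    """One step of latent adversarial training (sketch).

    Rather than perturbing the input x, perturb the latent
    z = encoder(x) to maximize the loss, then train the model
    to behave well under that worst-case internal perturbation.
    """
    z = encoder(x).detach()
    delta = torch.zeros_like(z, requires_grad=True)

    # Inner loop: gradient ascent on the latent perturbation.
    for _ in range(n_steps):
        loss = F.cross_entropy(head(z + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad.sign()
            delta.clamp_(-eps, eps)  # keep the perturbation small

    # Outer step: recompute z with gradients flowing to the encoder,
    # and train both parts against the (fixed) adversarial perturbation.
    z = encoder(x)
    adv_loss = F.cross_entropy(head(z + delta.detach()), y)
    return adv_loss
```

In a training loop you would backpropagate `adv_loss` and take an optimizer step as usual; the key design choice is that the adversary operates in activation space, where triggering bad behavior may be far easier than in input space.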
