
17 - Training for Very High Reliability with Daniel Ziegler
AXRP - the AI X-risk Research Podcast
Is There a Problem With Token Substitution?
The human-generated examples were written by starting from existing violent snippets. The humans' job was to rewrite each one so that it stayed violent but the classifier no longer detected it. They used tools like a token substitution tool and a saliency mapping technique to decide which tokens to consider replacing. But I'm wondering if there's some danger in only looking at variations of these existing violent snippets, rather than anything the humans could think of.
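As a rough illustration of the kind of tooling described here (not the actual Redwood Research implementation; the toy classifier, its weights, and the synonym table below are all hypothetical), saliency-guided token substitution can be sketched as: score each token's contribution to the classifier's "violence" score, then greedily swap the most salient tokens for meaning-preserving alternatives until the score falls below the detection threshold.

```python
# Minimal sketch of saliency-guided token substitution against a toy
# linear "violence" classifier. All weights and synonyms are invented
# for illustration; this is not the tooling from the episode.

# Toy linear classifier: the score is the sum of per-token weights.
WEIGHTS = {"stabbed": 3.0, "attacked": 2.5, "knife": 2.0,
           "blade": 1.0, "poked": 0.5, "villain": 0.5, "the": 0.0}

# Hypothetical substitutions that keep the snippet's violent meaning.
SYNONYMS = {"stabbed": ["poked"], "knife": ["blade"]}

def score(tokens):
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def saliency(tokens):
    # For a linear model, each token's saliency is simply its weight
    # contribution; for a neural classifier one would use gradients.
    return {i: WEIGHTS.get(t, 0.0) for i, t in enumerate(tokens)}

def attack(tokens, threshold):
    tokens = list(tokens)
    # Greedily replace the most salient token that has a lower-weight
    # synonym, until the classifier no longer flags the snippet.
    while score(tokens) >= threshold:
        for i, _ in sorted(saliency(tokens).items(), key=lambda kv: -kv[1]):
            candidates = [s for s in SYNONYMS.get(tokens[i], [])
                          if WEIGHTS.get(s, 0.0) < WEIGHTS.get(tokens[i], 0.0)]
            if candidates:
                tokens[i] = min(candidates, key=lambda s: WEIGHTS.get(s, 0.0))
                break
        else:
            break  # no substitution left to try; the attack failed
    return tokens

snippet = ["the", "villain", "stabbed", "the", "villain"]
print(score(snippet))                 # 4.0: flagged at threshold 3.0
print(attack(snippet, threshold=3.0)) # "stabbed" swapped for "poked"
```

The worry raised in the snippet is visible even in this toy: the attack only explores the neighborhood of the original violent text, so the resulting adversarial examples are all small perturbations of existing snippets rather than arbitrary violent text.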