Is There a Problem With Token Substitution?
The jusenovo human-generated examples were produced by starting from existing violent snippets. The humans' job was to rewrite each snippet so that it stayed violent but the classifier no longer detected it. They used tools like a token substitution tool and a saliency mapping technique to decide which tokens to consider replacing. But I'm wondering if there's some danger in only exploring variations of these existing violent snippets, rather than anything an adversary could think of.
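
For concreteness, here's a minimal sketch of the saliency-mapping side of that workflow: rank tokens by the gradient of the classifier's target logit with respect to their embeddings, so a human (or tool) knows which tokens are most worth substituting. This is my own illustration, not the project's actual tooling; the model name is a stand-in sentiment classifier, and in the real setting the target label would be the "violent" class.

```python
# Sketch of gradient-based token saliency, assuming a HuggingFace-style
# sequence classifier. Model name and label index are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"  # stand-in
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def token_saliency(text: str, target_label: int = 1):
    """Rank tokens by the gradient norm of the target logit w.r.t. their embedding."""
    enc = tokenizer(text, return_tensors="pt")
    # Embed manually so we can take gradients with respect to the embeddings.
    embeddings = model.get_input_embeddings()(enc["input_ids"])
    embeddings = embeddings.detach().requires_grad_(True)
    out = model(inputs_embeds=embeddings, attention_mask=enc["attention_mask"])
    out.logits[0, target_label].backward()
    scores = embeddings.grad.norm(dim=-1).squeeze(0)  # one score per token
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return sorted(zip(tokens, scores.tolist()), key=lambda p: -p[1])

# The highest-saliency tokens are the ones most worth substituting when
# trying to flip the classifier while preserving the snippet's meaning.
for tok, score in token_saliency("He grabbed the knife and lunged at her.")[:5]:
    print(f"{tok:>12s}  {score:.4f}")
```

The concern in my question is exactly about this loop: saliency-guided substitution searches a small neighborhood around known positives, so it may never surface the classifier's failures on phrasings that don't resemble any seed snippet.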