
Researchers Expose "Adversarial Poetry" AI Jailbreak Flaw

No Priors AI



Jeremy summarizes research showing poetic prompts can bypass AI safety filters to elicit dangerous instructions.

