AI-powered
podcast player
Listen to all your favourite podcasts with AI-powered features
The Safety of LLM Prompt Injection
New research has uncovered some new LLM attacks on the block. Prompt injection is where you handcraft a prompt that tricks a chat bot into not following its own rules. The biggest difference here is that they're achieving the jailbreak in an entirely automated fashion and make a case for the possibility that such behavior may never be fully patchable by LLM providers. You just heard one of our five top stories from Monday's changelog news.