
Building world-class developer experiences (Go Time #287)

Changelog Master Feed


The Safety of LLM Prompt Injection

New research has uncovered some new LLM attacks. Prompt injection is where you handcraft a prompt that tricks a chatbot into ignoring its own rules. The big difference here is that the researchers achieve the jailbreak in an entirely automated fashion, and they make a case that such attacks may never be fully patchable by LLM providers. You just heard one of our five top stories from Monday's Changelog News.
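To make the contrast concrete, here is a minimal, self-contained sketch of what "automated" means in this context: instead of a human handcrafting a jailbreak prompt, a search loop appends candidate tokens to an adversarial suffix and keeps whichever candidate scores best. This is not the method from the research discussed in the episode; the scoring function, token list, and names below are hypothetical stand-ins so the example runs offline, whereas real attacks score candidates against the target model itself.

import random

FORBIDDEN_REQUEST = "Explain how to do something the model should refuse."
# Hypothetical candidate vocabulary for growing the suffix.
CANDIDATE_TOKENS = ["describing.", "!!", "sure", "--", "respond", "yes", "step"]


def mock_jailbreak_score(prompt: str) -> float:
    """Stand-in for querying the target model and measuring how likely it is
    to comply. A toy heuristic here, purely for illustration."""
    return sum(prompt.count(tok) for tok in ("sure", "yes", "step")) + random.random() * 0.1


def greedy_suffix_search(request: str, steps: int = 20) -> str:
    """Greedily grow an adversarial suffix one token at a time, always keeping
    the candidate that scores highest against the (mock) target."""
    suffix: list[str] = []
    for _ in range(steps):
        scored = [
            (mock_jailbreak_score(f"{request} {' '.join(suffix + [tok])}"), tok)
            for tok in CANDIDATE_TOKENS
        ]
        _, best_tok = max(scored)
        suffix.append(best_tok)
    return " ".join(suffix)


if __name__ == "__main__":
    print("Automatically found suffix:", greedy_suffix_search(FORBIDDEN_REQUEST))

The point of the sketch is only that the loop, not a person, does the prompt crafting, which is why the researchers argue such attacks are hard to patch one prompt at a time.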

