Security, Spoken cover image

The Security Hole at the Heart of ChatGPT and Bing

Security, Spoken

00:00

How to Avoid Indirect Prompt Injection Attacks

An attacker may send emails that contain prompt injection attacks. The race to embed generative AI into products from to-do list apps to Snapchat widens where attacks could happen. If a chat bot is set up to answer questions about information stored in a database, it could cause problems. Prompt injection provides a way for users to override the developers instructions.

Transcript
Play full episode

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app