
The Security Hole at the Heart of ChatGPT and Bing
Security, Spoken
00:00
How to Avoid Indirect Prompt Injection Attacks
An attacker may send emails that contain prompt injection attacks. The race to embed generative AI into products from to-do list apps to Snapchat widens where attacks could happen. If a chat bot is set up to answer questions about information stored in a database, it could cause problems. Prompt injection provides a way for users to override the developers instructions.
Transcript
Play full episode