Security, Spoken cover image

The Security Hole at the Heart of ChatGPT and Bing

Security, Spoken

00:00

The Threat of Indirect Prompt Injections

Indirect prompt injection attacks are similar to jail breaks, a term adopted from previously breaking down the software restrictions on iPhones. Prompt injection is easier to exploit or has less requirements to be successfully exploited than other types of attacks against machine learning or AI systems. There's been a steady uptick of security researchers and technologists poking holes in LLMs.

Transcript
Play full episode

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app