Cloud Security Podcast cover image

AI Security - Can LLM be Attacked?

Cloud Security Podcast

00:00

The Attacks on LLM Systems

The first category of attacks that is possible in LLM systems is the LLM application input of prompts. If you Google enough, you'll probably find those prompts as well where you're able to trick the system into giving responses that they're designed not to give. The second kind of attack that can happen under this category is unauthorized code execution. You're also using natural language to create phishing emails and a lot more.

Transcript
Play full episode

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app