Cloud Security Podcast cover image

AI Security - Can LLM be Attacked?

Cloud Security Podcast

CHAPTER

The Attacks on LLM Systems

The first category of attacks that is possible in LLM systems is the LLM application input of prompts. If you Google enough, you'll probably find those prompts as well where you're able to trick the system into giving responses that they're designed not to give. The second kind of attack that can happen under this category is unauthorized code execution. You're also using natural language to create phishing emails and a lot more.

00:00
Transcript
Play full episode

Remember Everything You Learn from Podcasts

Save insights instantly, chat with episodes, and build lasting knowledge - all powered by AI.
App store bannerPlay store banner