AI-powered
podcast player
Listen to all your favourite podcasts with AI-powered features
The Attacks on LLM Systems
The first category of attacks that is possible in LLM systems is the LLM application input of prompts. If you Google enough, you'll probably find those prompts as well where you're able to trick the system into giving responses that they're designed not to give. The second kind of attack that can happen under this category is unauthorized code execution. You're also using natural language to create phishing emails and a lot more.