

When AI gets a to-do list. [Research Saturday]
May 3, 2025
Shaked Reiner, Principal Security Researcher at CyberArk, dives into the realm of Agentic AI and its security challenges. He explains how these AI systems can perform autonomous tasks, but can also become threats through vulnerabilities like prompt injection. Shaked emphasizes treating agent outputs as untrusted code to mitigate risks. The conversation also covers the need for monitoring, auditing, and innovative security strategies to keep pace with the rapidly evolving landscape of AI threats.
AI Snips
What Is Agentic AI?
- Agentic AI lets large language models autonomously control program flow and perform real-world actions.
- This autonomy makes agentic AI systems more useful but introduces greater security risks compared to traditional LLMs.
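The idea that the model, rather than the programmer, controls program flow can be sketched in a few lines. This is a minimal illustration, not CyberArk's or any framework's implementation; `fake_llm`, `run_agent`, and the `CALL`/`FINAL` convention are all hypothetical names invented for the example.

```python
# Minimal sketch of an agentic loop: the model's output, not hard-coded
# logic, decides which tool runs next. fake_llm stands in for a real
# model call so the example is self-contained.

def fake_llm(prompt: str) -> str:
    # A real agent would call an LLM API here; this stub just returns
    # a tool directive in an assumed "CALL <tool>" / "FINAL <text>" format.
    if "weather" in prompt.lower():
        return "CALL get_weather"
    return "FINAL I cannot help with that."

TOOLS = {"get_weather": lambda: "Sunny, 22 C"}

def run_agent(user_request: str) -> str:
    decision = fake_llm(user_request)
    if decision.startswith("CALL "):
        tool_name = decision.removeprefix("CALL ")
        # Program flow is chosen by model output -- the defining trait
        # of agentic AI, and the root of its extra security risk.
        if tool_name in TOOLS:
            return TOOLS[tool_name]()
        return f"Unknown tool: {tool_name}"
    return decision.removeprefix("FINAL ")

print(run_agent("What's the weather today?"))  # Sunny, 22 C
```

Because the branch taken depends entirely on model output, anything that influences that output (including injected text) influences what the program does.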
Agentic AI Threat Landscape
- Agentic AI systems face traditional access attacks plus new LLM-specific attack surfaces like prompt injection.
- These LLM-based attacks manipulate agent behavior beyond intended functions, increasing security concerns.
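The mitigation Shaked describes, treating agent output as untrusted code, can be illustrated with a simple allowlist gate. This is a hedged sketch of the principle, not a production defense; the action names and `execute_agent_action` helper are invented for the example.

```python
# Sketch: treat agent output as untrusted input. A prompt-injected model
# might emit an action outside its intended functions; an allowlist gate
# blocks anything not explicitly permitted.

ALLOWED_ACTIONS = {"summarize", "search"}

def execute_agent_action(raw_output: str) -> str:
    action = raw_output.strip().split()[0].lower()
    if action not in ALLOWED_ACTIONS:
        # Never execute unvetted agent output directly.
        return f"blocked: '{action}' not in allowlist"
    return f"executed: {action}"

print(execute_agent_action("search cyber news"))      # executed: search
print(execute_agent_action("delete_all_files now!"))  # blocked: 'delete_all_files' not in allowlist
```

The gate does not stop the injection itself; it limits the blast radius by validating the agent's chosen action before anything runs, the same posture applied to any untrusted input.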
Identity Challenges with AI Agents
- How to identify AI agents poses a challenge: are they users, machines, or bots?
- They require permissions like access tokens and accounts, complicating traditional identity and access management.