AI-powered
podcast player
Listen to all your favourite podcasts with AI-powered features
Security Implications of Large Language Models in Agentic Systems
The chapter explores the security risks of using large language models (LLMs) in agentic systems, showcasing their vulnerability to external manipulation for both intended and unintended actions. It discusses attacks on open models, transferability to Blackbots models, and the role of open weight models in security research. The conversation also delves into generative AI games focused on exfiltrating data from LLMs, optimizing attacks using invisible strings, and exploiting system tokens to manipulate model responses.