18min chapter

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Coercing LLMs to Do and Reveal (Almost) Anything with Jonas Geiping - #678


CHAPTER

Security Implications of Large Language Models in Agentic Systems

The chapter explores the security risks of using large language models (LLMs) in agentic systems, showcasing their vulnerability to external manipulation toward both intended and unintended actions. It discusses attacks on open models, the transferability of those attacks to black-box models, and the role of open-weight models in security research. The conversation also delves into generative AI games focused on exfiltrating data from LLMs, optimizing attacks using invisible strings, and exploiting system tokens to manipulate model responses.

00:00
