Larger, more capable language models (LLMs) inherently possess a greater attack surface, which makes them more vulnerable to exploitation through techniques like jailbreak and prompt injection. Effective alignment of these models is challenging, as the reinforcement learning from human feedback only covers a limited part of the expansive operational space. When presented with inputs outside this narrow training distribution, the unpredictable behavior of the LLM poses significant risks for alignment efforts.
If you have questions at the intersection of Cybersecurity and AI, you need to know Donato at WithSecure! Donato has been threat modeling AI applications and seriously applying those models in his day-to-day work. He joins us in this episode to discuss his LLM application security canvas, prompt injections, alignment, and more.
Leave us a comment
Changelog++ members save 9 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
- Assembly AI – Turn voice data into summaries with AssemblyAI’s leading Speech AI models. Built by AI experts, their Speech AI models include accurate speech-to-text for voice data (such as calls, virtual meetings, and podcasts), speaker detection, sentiment analysis, chapter detection, PII redaction, and more.
- Porkbun – Go to porkbun.com to get .app, .dev, or .foo domain names at Porkbun for only $1 for the first year!
- Changelog News – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today.
Featuring:
Show Notes:
Something missing or broken? PRs welcome!