Security assessments of tools like LLMs should focus on how they are used rather than on whether the tools are inherently safe. Just as a knife can cause harm when misused, an LLM carries risks when applied to the wrong use case. The priority is to identify the problem being solved and to ensure the interaction with the tool is secure within its intended context, not to question the tool's safety in isolation.
If you have questions at the intersection of cybersecurity and AI, you need to know Donato at WithSecure! Donato has been threat modeling AI applications and seriously applying those models in his day-to-day work. He joins us in this episode to discuss his LLM application security canvas, prompt injection, alignment, and more.
Changelog++ members save 9 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
- Assembly AI – Turn voice data into summaries with AssemblyAI’s leading Speech AI models. Built by AI experts, their Speech AI models include accurate speech-to-text for voice data (such as calls, virtual meetings, and podcasts), speaker detection, sentiment analysis, chapter detection, PII redaction, and more.
- Porkbun – Go to porkbun.com to get .app, .dev, or .foo domain names at Porkbun for only $1 for the first year!
- Changelog News – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today.
Featuring:
Show Notes:
Something missing or broken? PRs welcome!