AI Security Podcast - ChatGPT and other generative AI tools are built on Large Language Models (LLMs), but can these AI systems be attacked? ☠ 🤔 In this 3-part AI Security series from Cloud Security Podcast, we talk about the importance of AI security and how to protect your Large Language Model (LLM) program from attack. How can LLMs be attacked by malicious threat actors - beyond the phishing emails that everyone has been talking about?
Who is this episode for?
If you work with LLMs used by an AI system, or are working on securing an internal LLM being built, then you will find this video helpful for understanding the types of attacks that can be used against an LLM.
(00:00) Intro
(00:49) LLM Explained
(01:40) LLM Application Input Prompts
(03:01) Data used by LLM Applications
(04:58) LLM Applications Themselves
(08:15) Infrastructure used to host LLM Application
(11:11) What about Responsible AI
(12:05) Ways to protect LLM Applications against these attacks
(13:00) Useful Resources for AI Security
(13:30) How do you defend against AI attacks?
(13:38) Outro - Thank you for watching & Subscribing
See you at the next episode!