Partially Redacted: Data, AI, Security, and Privacy

Prompt Injection Attacks with SVAM's Devansh

Mar 27, 2024
Devansh, AI Solutions Lead at SVAM, discusses prompt injection attacks in Large Language Models (LLMs), vulnerabilities, real-world examples (like extraction of training data), attack strategies (leaking prompts, subverting app's purpose), motives behind attacks, consequences of successful attacks. The podcast covers bridging the gap in AI concepts, navigating prompt injection attacks, model vulnerabilities, privacy concerns in LLMs, and concludes with contact information for the guest.
Ask episode
Chapters
Transcript
Episode notes