Critical Thinking - Bug Bounty Podcast

Episode 152: GeminiJack and Agentic Security with Sasi Levi

Dec 11, 2025
Sasi Levi, a security researcher at Noma Security with a focus on AI and agentic security, shares his insights on cutting-edge vulnerabilities. He dives into the Google Vertex AI bug he discovered, revealing how it accessed confidential employee data. Sasi explains the mechanics of prompt injection and its implications, and discusses his innovative techniques for testing AI responses through documents. He also highlights his journey as a bug bounty hunter and the challenges facing security in AI applications.
Ask episode
AI Snips
Chapters
Transcript
Episode notes
INSIGHT

RAG Can Pull Sensitive Enterprise Context

  • Sasi discovered Vertex AI could pull enterprise data across Gmail, Docs and Calendar into model context.
  • The model used that context to answer and escalate queries, revealing internal data without user notice.
ANECDOTE

Calendar Event Became An Execution Context

  • Sasi put a benign question into a Calendar event and then queried Vertex; Gemini returned the calendar content including the injected instruction.
  • That proved the model treated injected event text as operational context and executed it.
ANECDOTE

Indirect Prompt Injection Used Image Exfiltration

  • Sasi built an indirect prompt that asked Gemini to include an answer into a parameter X and append an image URL; the generated image request contained internal data.
  • He encoded spaces and used a crafted image tag to exfiltrate the information via an HTTP request to his server.
Get the Snipd Podcast app to discover more snips from this episode
Get the app