AI Security Podcast

The Future of AI Security is Scaffolding, Agents & The Browser

23 snips
Sep 9, 2025
In this discussion, Jason Haddix, an offensive security expert from Arcanum, and Daniel Miessler, founder of Unsupervised Learning, dive into the 2025 landscape of AI security. They reveal how LLMs are leaking into broader ecosystems, becoming tools for malicious prompts and exploiting vulnerabilities. The duo highlights the critical yet unsolved problem of prompt injection and the challenges posed by privacy laws on incident response. They emphasize the need for innovative threat modeling and proactive security measures to navigate this evolving danger.
Ask episode
AI Snips
Chapters
Transcript
Episode notes
ANECDOTE

Agent Solved Several CTFs Quickly

  • Jason hooked an open-source model to an offensive agent framework and solved multiple CTFs using RAG and puppeteer/playwright.
  • He compared the agent's output to a junior pen tester in capability.
INSIGHT

Scaffolding Beats Raw Model Power

  • Most AI value comes from the scaffolding and system around models rather than raw model capability.
  • Stitching tools, pipelines and agents produces dependable, real-world results.
INSIGHT

Prompt Injection Is Fundamentally Hard

  • Prompt injection remains fundamentally hard because models are designed to answer and are non-deterministic.
  • Adding guardrails reduces accuracy and slows inference, so prompt injection persists as an open problem.
Get the Snipd Podcast app to discover more snips from this episode
Get the app