

From Toil to Intelligence: Brad Geesaman on the Future of AppSec with AI Agents
In this episode, host Anshuman Bhartiya sits down with Brad Geesaman, a Google Cloud Certified Fellow and Principal Security Engineer at Ghost Security, to explore the cutting edge of Application Security. With 22 years in the industry, Brad shares his journey and discusses how his team is leveraging agentic AI and Large Language Models (LLMs) to tackle some of the oldest challenges in AppSec, aiming to shift security from a reactive chore to a proactive, intelligent function. The conversation delves into practical strategies for reducing the "toil" of security tasks, the challenges of working with non-deterministic LLMs, the critical role of context in security testing, and the essential skills the next generation of security engineers must cultivate to succeed in an AI-driven world.
Key Takeaways
- Reducing AppSec Toil: The primary focus of using AI in AppSec is to reduce repetitive tasks (toil) and surface meaningful risks. With AppSec engineers often outnumbered 100 to 1 by developers, AI can help manage the immense volume of work by automating the process of gathering context and assessing risk for findings from SCA, SAST, and secrets scanning.
- Making LLMs More Deterministic: To achieve consistent, high-quality results from non-deterministic LLMs, the key is to use them "as sparingly as possible." Instead of having an LLM manage an entire workflow, break the problem into smaller pieces, use traditional code for the deterministic steps, and reserve the LLM for narrow tasks like classification or validation, where its strengths actually apply.
- The Importance of Evals: Continuous and rigorous evaluations ("evals") are crucial to maintaining quality and consistency in an LLM-powered system. By running a representative dataset against the system every time a change is made—even a small prompt modification—teams can measure the impact and ensure the system's output remains within desired quality boundaries.
- Context is Key (CAST): Ghost Security is pioneering Contextual Application Security Testing (CAST), an approach that flips traditional scanning on its head. Instead of finding a pattern and then searching for context, CAST first builds a deep understanding of the application by mapping out call paths, endpoints, authentication, and data handling, and then uses that rich context to ask targeted security questions and run specialized agents.
- Prototyping with Frontier vs. Local Models: The typical workflow for prototyping is to first use a powerful frontier model to quickly prove a concept's value. Once validated, the focus shifts to exploring if the same task can be accomplished with smaller, local models to address cost, privacy, and data governance concerns.
- The Future Skill for AppSec Engineers: Beyond familiarity with LLMs, the most important skill for the next generation of AppSec engineers is the ability to think in terms of scalable, interoperable systems. The future lies in creating systems that can share context and work together—not just within the AppSec team, but across the entire security organization and with development teams—to build a more cohesive and effective security posture.

Tune in for a deep dive into the future of AppSec with AI and AI Agents!
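To make the "use the LLM sparingly" takeaway concrete, here is a minimal sketch of that pattern applied to SAST-finding triage: plain code handles the deterministic steps (deduplication, severity routing), and the LLM is invoked only for the one judgment call. The `Finding` schema and the `classify_with_llm` stub are hypothetical illustrations, not Ghost Security's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    rule_id: str
    file: str
    severity: str  # "low" | "medium" | "high"

def dedupe(findings):
    # Deterministic step: drop duplicate findings, preserving order.
    seen, unique = set(), []
    for f in findings:
        key = (f.rule_id, f.file)
        if key not in seen:
            seen.add(key)
            unique.append(f)
    return unique

def needs_review(f):
    # Deterministic routing: only high-severity findings reach the LLM.
    return f.severity == "high"

def classify_with_llm(finding):
    # Placeholder for the single, narrow LLM call (e.g. "real risk or noise?").
    # A real system would send the finding plus gathered code context here.
    return "needs-human-review"

def triage(findings):
    # The workflow itself is ordinary code; the LLM never "manages" it.
    results = {}
    for f in dedupe(findings):
        verdict = classify_with_llm(f) if needs_review(f) else "auto-closed"
        results[(f.rule_id, f.file)] = verdict
    return results
```

Because every step except the classification is deterministic, the same input yields the same routing decisions on every run, which also makes the pipeline easy to cover with the kind of eval dataset described above.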
Contacting Brad
* LinkedIn: https://www.linkedin.com/in/bradgeesaman/
* Company Website: https://ghostsecurity.com/
Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya
Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/