The Boring AppSec Podcast

Feb 2, 2026 • 57min

Security at Scale in a Probabilistic World with Ankur Chakraborty

In this episode, Ankur Chakraborty discusses the evolution of AI security, emphasizing that foundational security principles still apply in the generative AI era. He explores the challenge of scaling security in an age of rapid feature deployment and the need to integrate AI tools into security practice. The conversation covers the balance between human oversight and autonomous systems, the significance of context in security decision-making, evaluating security tools by their outcomes, and why better guardrails and context engineering matter for stronger security practices.
Jan 28, 2026 • 55min

The Future of Identity in AI Agents with Ian Livingstone

Ian Livingstone, CEO and co-founder of Keycard and a serial builder focused on developer experience, explores agent identity in the AI era. He discusses non-deterministic AI behavior and why current service accounts fall short. The conversation covers fine-grained, federated permissions, the risks of agents accessing the public web, and how liability, insurance, and engineering practices must evolve.
Jan 19, 2026 • 50min

Rethinking Enterprise Security in an AI- and Platform-First World with Kane Narraway

In this episode, we sit down with Kane Narraway to unpack how enterprise security is changing as AI, platforms, and developer-driven security become the norm. Kane shares his path from digital forensics to leading security at Canva, and why understanding company culture matters just as much as choosing the right tools.

We discuss why modern security is becoming platform-first, why much of the security vendor market optimizes for finding problems rather than fixing them, and why Kane believes security teams need more engineers and fewer manual processes.

The conversation also digs into AI security, shadow IT (and shadow AI), and the real-world trade-offs between usability and control, especially as low-code and no-code tools become more common inside companies.

Tune in for a deep dive!

Connect with Kane Narraway
* LinkedIn: https://www.linkedin.com/in/kane-n/
* Blog: https://kanenarraway.com/

Connect with Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Connect with Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
Dec 15, 2025 • 52min

The Future of Developer Security with Travis McPeak

Travis McPeak, a security leader and entrepreneur who has led initiatives at major companies like Symantec and Netflix, discusses the future of developer security. He emphasizes the role of AI in shifting security "left" and integrating it seamlessly into developer tools. Travis highlights the challenges of compliance in cloud security and how AI can make threat modeling feasible. He also weighs the benefits and risks of AI for developers, emphasizing that ownership is key to using AI-generated code effectively.
Dec 4, 2025 • 52min

Scaling Product Security In The AI Era with Teja Myneedu

In this episode, we sit down with Teja Myneedu, Sr. Director, Security and Trust at Navan. He shares his philosophy on achieving security at scale, discussing challenges and approaches, especially in the AI era. Teja's career spans over two decades on the front lines of product security at hyper-growth companies like Splunk. He currently operates at the complex intersection of FinTech and corporate travel, where his responsibilities include securing financial transactions and ensuring the physical duty of care for global travelers.

Key Takeaways
* Scaling Security Philosophy: Security programs should be built on developer empathy and innovative solutions, scaling with context and automation.
* Pragmatic Protection: Focus on incremental, practical improvements (like WAF rules) to secure the enterprise immediately, instead of letting the pursuit of perfection delay necessary defenses; security by obscurity is not always bad.
* Flawed Prioritization: Prioritization frameworks are often flawed because they lack organizational and business context, which security tools fail to provide.
* AI and Code Fixes: AI is changing the application security field by reducing the cognitive load on engineers and making it easier for security teams to propose vulnerability fixes (PRs).
* The Authorization Dilemma: The biggest novel threat introduced by LLMs is the complexity of identity and authorization, as agents require delegated access and dynamically determine business logic.

Tune in for a deep dive!

Contacting Teja
* LinkedIn: https://www.linkedin.com/in/myneedu/
* Company Website: https://www.navan.com

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
Nov 24, 2025 • 46min

Architecting AI Security: Standards and Agentic Systems with Ken Huang

In this episode, we sit down with Ken Huang, a core architect behind modern AI security standards, to discuss the revolutionary challenges posed by agentic AI systems. Ken, who chairs the OWASP AIVSS project and co-chairs the AI safety working groups at the Cloud Security Alliance, breaks down how security professionals are writing the rulebook for a future driven by autonomous agents.

Key Takeaways
* AIVSS for Non-Deterministic Risk: The OWASP AIVSS project aims to provide a quantitative measure for core agentic AI risks by applying an agentic AI risk factor on top of CVSS, specifically addressing the autonomy and non-deterministic nature of AI agents.
* Need for Task-Scoped IAM: Traditional OAuth and SAML are inadequate for agentic systems because they provide coarse-grained, session-scoped access control. New authentication standards must be task-scoped, dynamically removing access once a specific task is complete, and driven by verifying the agent's intent (a minimal sketch of the idea follows at the end of this entry).
* A2A Security Requires New Protocols: Agent-to-agent (A2A) communication introduces security issues beyond traditional API security (like BOLA). New systems must use protocols for agent capability discovery and negotiation, validated by digital signatures, to ensure the trustworthiness and promised quality of service of interacting agents.
* Goal Manipulation is a Critical Threat: Sophisticated attacks often use context engineering to execute goal manipulation against agents. These attacks include gradually shifting an agent's objective (crescendo attack), using prompt injection to force the agent to expose secrets (malicious goal expansion), and forcing endless processing loops (exhaustion loop/denial of wallet).

Tune in for a deep dive!

Contacting Ken
* LinkedIn: https://www.linkedin.com/in/kenhuang8/
* Company Website: https://distributedapps.ai/
* Substack: https://kenhuangus.substack.com/
* Paper (Agent Capability Negotiation and Binding Protocol): https://arxiv.org/abs/2506.13590
* Book (Securing AI Agents): https://www.amazon.com/Securing-Agents-Foundations-Frameworks-Real-World/dp/3032021294
* AIVSS: https://aivss.owasp.org/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
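To make the task-scoped IAM takeaway concrete, here is a minimal sketch in Python. Every name in it (TaskScopedToken, issue_for_task, complete_task) is a hypothetical illustration of the pattern, not an API from AIVSS or any published standard: a credential is minted for one declared task and revoked the moment that task completes, rather than living for a whole session.

```python
# Hypothetical sketch of "task-scoped" agent credentials: access is granted
# for a declared task and dies with it, unlike session-scoped OAuth/SAML grants.
import secrets
import time
from dataclasses import dataclass, field


@dataclass
class TaskScopedToken:
    task_id: str
    allowed_actions: frozenset[str]  # narrow grant, derived from the task itself
    expires_at: float                # hard time limit as a backstop
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))
    revoked: bool = False

    def permits(self, action: str) -> bool:
        return (not self.revoked
                and time.time() < self.expires_at
                and action in self.allowed_actions)


def issue_for_task(task_id: str, actions: set[str], ttl_s: float = 300.0) -> TaskScopedToken:
    """Mint a credential scoped to one task; intent verification would happen here."""
    return TaskScopedToken(task_id, frozenset(actions), time.time() + ttl_s)


def complete_task(tok: TaskScopedToken) -> None:
    """Access is removed dynamically the moment the task is done."""
    tok.revoked = True


tok = issue_for_task("refund-ticket-123", {"read:order", "issue:refund"})
assert tok.permits("issue:refund")
complete_task(tok)
assert not tok.permits("issue:refund")  # no lingering session-wide access
```

A real system would additionally verify the agent's declared intent before minting the token, which is the part the episode argues existing OAuth and SAML flows cannot express.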
Oct 1, 2025 • 48min

The Attacker's Perspective on AI Security with Aryaman Behera

In this episode, hosts Sandesh and Anshuman chat with Aryaman Behera, the Co-Founder and CEO of Repello AI. Aryaman shares his unique journey from bug bounty hunter and captain of India's top-ranked CTF team, InfoSec IITR, to CEO of an AI security startup. The discussion offers a deep dive into the attacker-centric mindset required to secure modern AI applications, which are fundamentally probabilistic and differ greatly from traditional deterministic software. Aryaman explains the technical details behind Repello's platform, which combines automated red teaming (Artemis) with adaptive guardrails (Argus) to create a continuous security feedback loop. The conversation explores the nuanced differences between AI safety and security, the critical role of threat modeling for agentic workflows, and the complex challenges of responsible disclosure for non-deterministic vulnerabilities.

Key Takeaways
- From Hacker to CEO: Aryaman discusses the transition from an attacker's mindset, focused on quick exploits, to a CEO's mindset, which requires patience and long-term relationship building with customers.
- A New Kind of Threat: AI applications introduce a new attack surface built on prompts, knowledge bases, and probabilistic models, which increases the blast radius of potential security breaches compared to traditional software.
- Automated Red Teaming and Defense: Repello's platform consists of two core products: Artemis, an offensive AI red teaming platform that discovers failure modes, and Argus, a defensive guardrail system. The two create a continuous feedback loop in which vulnerabilities found by Artemis are used to calibrate and create policies for Argus.
- Threat Modeling for AI Agents: For complex agentic systems, a black-box approach is often insufficient. Repello uses a gray-box method in which a tool called AgentWiz helps customers generate a threat model based on the agent's workflow and capabilities, without needing access to the source code.
- The Challenge of Non-Deterministic Vulnerabilities: Unlike traditional software vulnerabilities, which are deterministic, AI exploits are probabilistic. An attack like a system prompt leak only needs to succeed once to be effective, even if it fails nine out of ten times (the arithmetic is sketched after this entry).
- The Future of Attacks is Multimodal: Aryaman predicts that as AI applications evolve, major new attack vectors will emerge from interfaces like voice and image, as their larger latent space offers more opportunities for malicious embeddings.

Tune in for a deep dive!

Contacting Aryaman
* LinkedIn: https://www.linkedin.com/in/aryaman-behera/
* Company Website: https://repello.ai/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
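As a back-of-the-envelope illustration of the probabilistic-vulnerability point above: taking the episode's "fails nine out of ten times" as an assumed 10% per-attempt success rate (an example figure, not a measured one), the chance of at least one success grows quickly with repeated attempts.

```python
# Probability that a flaky, probabilistic exploit lands at least once:
# P(any success in n attempts) = 1 - (1 - p)^n
p_single = 0.10  # assumed per-attempt success rate ("one out of ten times")
for attempts in (1, 10, 20, 50):
    p_any = 1 - (1 - p_single) ** attempts
    print(f"{attempts:>3} attempts -> P(at least one success) = {p_any:.2f}")
# Output: 1 -> 0.10, 10 -> 0.65, 20 -> 0.88, 50 -> 0.99
```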
Sep 8, 2025 • 52min

From Toil to Intelligence: Brad Geesaman on the Future of AppSec with AI Agents

In this episode, host Anshuman Bhartiya sits down with Brad Geesaman, a Google Cloud Certified Fellow and Principal Security Engineer at Ghost Security, to explore the cutting edge of application security. With 22 years in the industry, Brad shares his journey and discusses how his team is leveraging agentic AI and large language models (LLMs) to tackle some of the oldest challenges in AppSec, aiming to shift security from a reactive chore to a proactive, intelligent function. The conversation delves into practical strategies for reducing the "toil" of security tasks, the challenges of working with non-deterministic LLMs, the critical role of context in security testing, and the essential skills the next generation of security engineers must cultivate to succeed in an AI-driven world.

Key Takeaways
- Reducing AppSec Toil: The primary focus of using AI in AppSec is to reduce repetitive tasks (toil) and surface meaningful risks. With AppSec engineers often outnumbered 100 to 1 by developers, AI can help manage the immense volume of work by automating the process of gathering context and assessing risk for findings from SCA, SAST, and secrets scanning.
- Making LLMs More Deterministic: To achieve consistent, high-quality results from non-deterministic LLMs, the key is to use them "as sparingly as possible". Instead of having an LLM manage an entire workflow, break the problem into smaller pieces, use traditional code for deterministic steps, and reserve the LLM for specific tasks like classification or validation where its strengths are best utilized (see the sketch after these takeaways).
- The Importance of Evals: Continuous, rigorous evaluations ("evals") are crucial to maintaining quality and consistency in an LLM-powered system. By running a representative dataset against the system every time a change is made, even a small prompt modification, teams can measure the impact and ensure the system's output remains within desired quality boundaries.
- Context is Key (CAST): Ghost Security is pioneering Contextual Application Security Testing (CAST), an approach that flips traditional scanning on its head. Instead of finding a pattern and then searching for context, CAST first builds a deep understanding of the application by mapping out call paths, endpoints, authentication, and data handling, then uses that rich context to ask targeted security questions and run specialized agents.
- Prototyping with Frontier vs. Local Models: The typical prototyping workflow is to first use a powerful frontier model to quickly prove a concept's value. Once validated, the focus shifts to whether the same task can be accomplished with smaller, local models to address cost, privacy, and data governance concerns.
- The Future Skill for AppSec Engineers: Beyond familiarity with LLMs, the most important skill for the next generation of AppSec engineers is the ability to think in terms of scalable, interoperable systems. The future lies in creating systems that can share context and work together, not just within the AppSec team but across the entire security organization and with development teams, to build a more cohesive and effective security posture.
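A minimal sketch of the "use the LLM as sparingly as possible" pattern described above, with llm_classify as a hypothetical stand-in for whatever model client you use: plain code handles the deterministic parsing and redaction steps, and the model is invoked only for the one judgment call per finding.

```python
# Sketch: traditional code for deterministic steps, LLM reserved for one
# narrow classification question. llm_classify is a hypothetical callable.
import json
import re


def parse_findings(raw: str) -> list[dict]:
    """Deterministic: plain code filters scanner output, no model involved."""
    return [f for f in json.loads(raw) if f.get("severity") in {"high", "critical"}]


def redact_secrets(text: str) -> str:
    """Deterministic: a regex is cheaper and more predictable than a prompt."""
    return re.sub(r"(?i)(api[_-]?key|token)\s*[:=]\s*\S+", r"\1=[REDACTED]", text)


def triage(raw_scanner_output: str, llm_classify) -> list[dict]:
    findings = parse_findings(raw_scanner_output)      # code, not LLM
    for f in findings:
        f["snippet"] = redact_secrets(f["snippet"])    # code, not LLM
        # The only LLM call: one narrow yes/no question per finding.
        f["reachable"] = llm_classify(
            "Is this finding reachable from user input? Answer yes or no.",
            f["snippet"],
        )
    return findings
```

Keeping the model call this narrow is also what makes the evals takeaway tractable: a fixed dataset of snippets with known yes/no answers can be replayed against the classifier after every prompt change to confirm the output stays within quality boundaries.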
Tune in for a deep dive into the future of AppSec with AI and AI agents!

Contacting Brad
* LinkedIn: https://www.linkedin.com/in/bradgeesaman/
* Company Website: https://ghostsecurity.com/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
Sep 2, 2025 • 54min

The Future of Autonomous Red Teaming with Ads Dawson

In this episode, we talk to Ads Dawson (Staff AI Security Researcher @ Dreadnode) about the evolving landscape of offensive security in the age of AI. The conversation covers the practical application of AI agents in red teaming, a critical look at industry standards like the OWASP Top 10 for LLMs, and Ads' hands-on approach to building and evaluating autonomous hacking tools. He shares insights from his work industrializing offensive security with AI and his journey as a self-taught professional, and offers advice for others looking to grow in the field.

Key Takeaways
- AI is a "Force Multiplier," Not a Replacement: Ads emphasizes that AI should be viewed as a productivity tool that enhances the capabilities of human security professionals, allowing them to scale their efforts and tackle more complex tasks. Human expertise remains critical, especially since much of the data used to train AI models originates from human researchers.
- Prompt Injection is a Mechanism, Not a Vulnerability: A key insight is that "prompt injection" itself isn't a vulnerability but a method used to deliver an exploit. The discussion highlights a broader critique of security frameworks like the OWASP Top 10, which can oversimplify complex issues and become compliance checklists rather than practical guides.
- Build Offensive Agents with Small, Focused Tasks: When creating offensive AI agents, the most successful approach is to break the overall objective into small, concise sub-tasks. For example, instead of a single goal to "find XSS," an agent would have separate tasks to log in, identify input fields, and then test those inputs (sketched at the end of this entry).
- Hands-On Learning and Community are Crucial for Growth: As a self-taught professional, Ads advocates getting deeply involved in the security community through meetups and CTFs. He stresses the importance of hands-on practice ("just play with it") and of curating your information feed by following trusted researchers to cut through the noise and continuously learn.

Tune in for a deep dive into the future of security and the innovative approaches shaping the industry!

Contacting Ads
* LinkedIn: https://www.linkedin.com/in/adamdawson0/
* Website: https://ganggreentempertatum.github.io/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
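A minimal sketch of the small-focused-sub-tasks pattern from the takeaways above, with run_agent_step as a hypothetical stand-in for an agent-framework call: the open-ended "find XSS" goal becomes a sequence of narrow steps, each passing its artifacts forward to the next.

```python
# Sketch: decompose one offensive objective into concise sub-tasks.
# run_agent_step is a hypothetical callable, not a real framework API.
SUBTASKS = [
    "Log in to the target application with the provided test credentials.",
    "Enumerate every user-controllable input field on the authenticated pages.",
    "Submit a canary payload to each input field and record where it is reflected.",
    "Test context-appropriate XSS payloads at each reflection point and report results.",
]


def run_objective(run_agent_step, target: str) -> list[dict]:
    results, context = [], {"target": target}
    for task in SUBTASKS:
        outcome = run_agent_step(task=task, context=context)  # one narrow goal per call
        context.update(outcome.get("artifacts", {}))          # carry findings forward
        results.append({"task": task, "outcome": outcome})
        if not outcome.get("success"):
            break  # later steps depend on earlier ones succeeding
    return results
```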
Aug 27, 2025 • 51min

Navigating AI's New Security Landscape with Vineeth Sai

In this episode, we talk to Vineeth Sai Narajala (Senior Security Engineer @ Meta) about the evolving landscape of AI security, focusing on the Model Context Protocol (MCP), the Enhanced Tool Definition Interface (ETDI), and the AI Vulnerability Scoring System (AIVSS). We explore the challenges of integrating AI into existing systems, the importance of identity management for AI agents, and the need for standardized security practices. The discussion emphasizes the necessity of adapting security measures to the unique risks posed by generative AI and the collaborative effort required to establish effective protocols.

Key Takeaways
- MCP simplifies AI integration but raises security concerns.
- Identity management is crucial for AI agents.
- ETDI addresses specific vulnerabilities in AI tools.
- AIVSS aims to standardize AI vulnerability assessments.
- Developers should start with minimal permissions for AI.
- Trust in the agent ecosystem is vital for security.
- Collaboration is key to developing effective security protocols.
- Security fundamentals still apply in AI integration.

Tune in for a deep dive into the future of security and the innovative approaches shaping the industry!

Contacting Vineeth
* LinkedIn: https://www.linkedin.com/in/vineethsai/
* Website: https://vineethsai.com/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
