The Boring AppSec Podcast

Oct 1, 2025 • 48min

The Attacker's Perspective on AI Security with Aryaman Behera

In this episode, hosts Sandesh and Anshuman chat with Aryaman Behera, Co-Founder and CEO of Repello AI. Aryaman shares his unique journey from bug bounty hunter and captain of India's top-ranked CTF team, InfoSec IITR, to CEO of an AI security startup. The discussion offers a deep dive into the attacker-centric mindset required to secure modern AI applications, which are fundamentally probabilistic and differ greatly from traditional deterministic software. Aryaman explains the technical details behind Repello's platform, which combines automated red teaming (Artemis) with adaptive guardrails (Argus) to create a continuous security feedback loop. The conversation explores the nuanced differences between AI safety and security, the critical role of threat modeling for agentic workflows, and the complex challenges of responsible disclosure for non-deterministic vulnerabilities.

Key Takeaways
- From Hacker to CEO: Aryaman discusses the transition from an attacker's mindset, focused on quick exploits, to a CEO's mindset, which requires patience and long-term relationship building with customers.
- A New Kind of Threat: AI applications introduce a new attack surface built on prompts, knowledge bases, and probabilistic models, which increases the blast radius of potential security breaches compared to traditional software.
- Automated Red Teaming and Defense: Repello's platform consists of two core products: Artemis, an offensive AI red-teaming platform that discovers failure modes, and Argus, a defensive guardrail system. The two create a continuous feedback loop in which vulnerabilities found by Artemis are used to calibrate and create policies for Argus.
- Threat Modeling for AI Agents: For complex agentic systems, a black-box approach is often insufficient. Repello uses a gray-box method in which a tool called AgentWiz helps customers generate a threat model based on the agent's workflow and capabilities, without needing access to the source code.
- The Challenge of Non-Deterministic Vulnerabilities: Unlike traditional software vulnerabilities, which are deterministic, AI exploits are probabilistic. An attack like a system prompt leak only needs to succeed once to be effective, even if it fails nine out of ten times (see the sketch after this episode's notes).
- The Future of Attacks is Multimodal: Aryaman predicts that as AI applications evolve, major new attack vectors will emerge from new interfaces like voice and image, as their larger latent space offers more opportunities for malicious embeddings.

Tune in for a deep dive!

Contacting Aryaman
* LinkedIn: https://www.linkedin.com/in/aryaman-behera/
* Company Website: https://repello.ai/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
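The "only needs to succeed once" point is worth quantifying. A quick illustrative calculation (not from the episode): if a single attempt succeeds with probability p, the chance of at least one success across n independent attempts is 1 - (1 - p)^n.

```python
# Why "succeeds once in ten tries" still matters: the probability that a
# probabilistic exploit lands at least once over n independent attempts.
def p_at_least_one(p_per_try: float, attempts: int) -> float:
    """Return 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_per_try) ** attempts

# An attack that fails nine times out of ten (p = 0.1) is near-certain to
# succeed given a few dozen tries.
for n in (1, 10, 50):
    print(f"{n:>3} attempts -> {p_at_least_one(0.1, n):.1%}")
# prints: 10.0%, 65.1%, 99.5%
```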
Sep 8, 2025 • 52min

From Toil to Intelligence: Brad Geesaman on the Future of AppSec with AI Agents

In this episode, host Anshuman Bhartiya sits down with Brad Geesaman, a Google Cloud Certified Fellow and Principal Security Engineer at Ghost Security, to explore the cutting edge of Application Security. With 22 years in the industry, Brad shares his journey and discusses how his team is leveraging agentic AI and Large Language Models (LLMs) to tackle some of the oldest challenges in AppSec, aiming to shift security from a reactive chore to a proactive, intelligent function. The conversation delves into practical strategies for reducing the "toil" of security tasks, the challenges of working with non-deterministic LLMs, the critical role of context in security testing, and the essential skills the next generation of security engineers must cultivate to succeed in an AI-driven world.

Key Takeaways
- Reducing AppSec Toil: The primary focus of using AI in AppSec is to reduce repetitive tasks (toil) and surface meaningful risks. With AppSec engineers often outnumbered 100 to 1 by developers, AI can help manage the immense volume of work by automating the process of gathering context and assessing risk for findings from SCA, SAST, and secrets scanning.
- Making LLMs More Deterministic: To achieve consistent and high-quality results from non-deterministic LLMs, the key is to use them "as sparingly as possible". Instead of having an LLM manage an entire workflow, break the problem into smaller pieces, use traditional code for deterministic steps, and reserve the LLM for specific tasks like classification or validation where its strengths are best utilized (see the sketch after this list).
- The Importance of Evals: Continuous and rigorous evaluations ("evals") are crucial to maintaining quality and consistency in an LLM-powered system. By running a representative dataset against the system every time a change is made (even a small prompt modification), teams can measure the impact and ensure the system's output remains within desired quality boundaries.
- Context is Key (CAST): Ghost Security is pioneering Contextual Application Security Testing (CAST), an approach that flips traditional scanning on its head. Instead of finding a pattern and then searching for context, CAST first builds a deep understanding of the application by mapping out call paths, endpoints, authentication, and data handling, and then uses that rich context to ask targeted security questions and run specialized agents.
- Prototyping with Frontier vs. Local Models: The typical prototyping workflow is to first use a powerful frontier model to quickly prove a concept's value. Once validated, the focus shifts to exploring whether the same task can be accomplished with smaller, local models to address cost, privacy, and data governance concerns.
- The Future Skill for AppSec Engineers: Beyond familiarity with LLMs, the most important skill for the next generation of AppSec engineers is the ability to think in terms of scalable, interoperable systems. The future lies in creating systems that can share context and work together, not just within the AppSec team but across the entire security organization and with development teams, to build a more cohesive and effective security posture.
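The "use the LLM as sparingly as possible" idea translates naturally into code. Here is a minimal sketch of that pattern; complete(prompt) is a hypothetical model-client helper, and the triage rules are illustrative, not Ghost Security's actual pipeline:

```python
# Deterministic steps stay in ordinary code; the model is reserved for one
# narrow classification task, and its output is validated afterward.
import json
import re

def extract_findings(report: str) -> list[dict]:
    """Deterministic: parse JSON-lines scanner output with plain code."""
    return [json.loads(line) for line in report.splitlines() if line.strip()]

def is_test_file(path: str) -> bool:
    """Deterministic: cheap triage rules need no model at all."""
    return bool(re.search(r"(^|/)(tests?|fixtures)/", path))

def classify_finding(finding: dict, complete) -> str:
    """Non-deterministic step, kept narrow and validated after the fact."""
    prompt = ("Classify this security finding as 'actionable' or 'noise'. "
              "Answer with one word.\n" + json.dumps(finding))
    answer = complete(prompt).strip().lower()
    return answer if answer in {"actionable", "noise"} else "noise"

def triage(report: str, complete) -> list[dict]:
    return [
        f for f in extract_findings(report)
        if not is_test_file(f.get("path", ""))             # filter first, cheaply
        and classify_finding(f, complete) == "actionable"  # LLM last, narrowly
    ]
```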
Tune in for a deep dive into the future of AppSec with AI and AI Agents!

Contacting Brad
* LinkedIn: https://www.linkedin.com/in/bradgeesaman/
* Company Website: https://ghostsecurity.com/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
Sep 2, 2025 • 54min

The Future of Autonomous Red Teaming with Ads Dawson

In this episode, we talk to Ads Dawson (Staff AI Security Researcher @ Dreadnode). We discuss the evolving landscape of offensive security in the age of AI. The conversation covers the practical application of AI agents in red teaming, a critical look at industry standards like the OWASP Top 10 for LLMs, and Ads' hands-on approach to building and evaluating autonomous hacking tools. He shares insights from his work industrializing offensive security with AI, his journey as a self-taught professional, and offers advice for others looking to grow in the field.

Key Takeaways
- AI is a "Force Multiplier," Not a Replacement: Ads emphasizes that AI should be viewed as a productivity tool that enhances the capabilities of human security professionals, allowing them to scale their efforts and tackle more complex tasks. Human expertise remains critical, especially since much of the data used to train AI models originates from human researchers.
- Prompt Injection is a Mechanism, Not a Vulnerability: A key insight is that "prompt injection" itself isn't a vulnerability but a method used to deliver an exploit. The discussion highlights a broader critique of security frameworks like the OWASP Top 10, which can sometimes oversimplify complex issues and become compliance checklists rather than practical guides.
- Build Offensive Agents with Small, Focused Tasks: When creating offensive AI agents, the most successful approach is to break down the overall objective into small, concise sub-tasks. For example, instead of a single goal to "find XSS," an agent would have separate tasks to log in, identify input fields, and then test those inputs (see the sketch after these notes).
- Hands-On Learning and Community are Crucial for Growth: As a self-taught professional, Ads advocates for getting deeply involved in the security community through meetups and CTFs. He stresses the importance of hands-on practice ("just play with it") and curating your information feed by following trusted researchers to cut through the noise and continuously learn.

Tune in for a deep dive into the future of security and the innovative approaches shaping the industry!

Contacting Ads
* Ads' LinkedIn: https://www.linkedin.com/in/adamdawson0/
* Ads' website: https://ganggreentempertatum.github.io/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
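The sub-task decomposition Ads describes can be made concrete. A purely illustrative sketch (the SubTask structure and run_subtask helper are hypothetical stand-ins, not Dreadnode's tooling):

```python
# Break "find XSS" into small, concise sub-tasks, each with one narrow
# instruction, instead of handing the agent a single broad goal.
from dataclasses import dataclass

@dataclass
class SubTask:
    name: str
    instruction: str
    done: bool = False

def xss_agent_plan(target: str) -> list[SubTask]:
    """One narrow instruction per step."""
    return [
        SubTask("login", f"Authenticate to {target} with the provided test account."),
        SubTask("map_inputs", "Enumerate forms and parameters that reflect user input."),
        SubTask("probe", "Submit a benign canary payload to each reflected input."),
        SubTask("verify", "Confirm whether any reflection executes; record a repro."),
    ]

def run_plan(plan: list[SubTask], run_subtask) -> list[SubTask]:
    # run_subtask(task) -> bool is assumed to wrap one agent loop iteration.
    for task in plan:
        task.done = run_subtask(task)
        if not task.done:
            break  # each step's success gates the next
    return plan
```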
Aug 27, 2025 • 51min

Navigating AI's New Security Landscape with Vineeth Sai

In this episode, we talk to Vineeth Sai Narajala (Senior Security Engineer @ Meta). We discuss the evolving landscape of AI security, focusing on the Model Context Protocol (MCP), the Enhanced Tool Definition Interface (ETDI), and the AI Vulnerability Scoring System (AIVSS). We explore the challenges of integrating AI into existing systems, the importance of identity management for AI agents, and the need for standardized security practices. The discussion emphasizes the necessity of adapting security measures to the unique risks posed by generative AI and the collaborative efforts required to establish effective protocols.

Key Takeaways
- MCP simplifies AI integration but raises security concerns.
- Identity management is crucial for AI agents.
- ETDI addresses specific vulnerabilities in AI tools.
- AIVSS aims to standardize AI vulnerability assessments.
- Developers should start with minimal permissions for AI (see the sketch after these notes).
- Trust in the agent ecosystem is vital for security.
- Collaboration is key to developing effective security protocols.
- Security fundamentals still apply in AI integration.

Tune in for a deep dive into the future of security and the innovative approaches shaping the industry!

Contacting Vineeth
* Vineeth's LinkedIn: https://www.linkedin.com/in/vineethsai/
* Vineeth's website: https://vineethsai.com/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
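"Start with minimal permissions" amounts to a deny-by-default gate in front of every tool call. An entirely illustrative sketch (ToolPolicy is a hypothetical wrapper, not part of MCP or any real agent framework):

```python
# Deny-by-default tool permissions: nothing is allowed until granted.
class ToolPolicy:
    def __init__(self) -> None:
        self._granted: set[str] = set()  # empty set = least privilege

    def grant(self, tool_name: str) -> None:
        self._granted.add(tool_name)

    def check(self, tool_name: str) -> None:
        if tool_name not in self._granted:
            raise PermissionError(f"agent may not call {tool_name!r}")

policy = ToolPolicy()
policy.grant("read_docs")           # grant only the read-only capability needed

policy.check("read_docs")           # passes silently: explicitly granted
try:
    policy.check("delete_records")  # never granted, so the call is denied
except PermissionError as err:
    print(err)                      # agent may not call 'delete_records'
```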
Jul 31, 2025 • 48min

Agentic AI: Transforming Vulnerability Management with Harry Wetherald

Harry Wetherald, Co-Founder and CEO of Maze, shares his expertise in AI and machine learning, particularly in the realm of vulnerability management. He delves into the concept of agentic AI, which allows AI systems to analyze vulnerabilities independently and dramatically improve efficiency. The conversation highlights the critical need for context engineering to tailor AI solutions for diverse organizations. Harry also discusses the hurdles of achieving reliable AI systems and emphasizes the importance of clear pricing strategies to improve customer experience and budget predictability.
Jul 23, 2025 • 57min

Surag Patel and Arshan Dabirsiaghi

In this episode, we talk to Surag Patel (CEO @ Pixee) and Arshan Dabirsiaghi (CTO @ Pixee). We discuss the transformative approach that Pixee is taking in application security. We explore the shift from traditional security tools that merely detect vulnerabilities to a model that emphasizes automated remediation. The discussion covers the evolving role of AppSec professionals, the integration of AI agents to scale coverage, the importance of trust in automated fixes, and the challenges of navigating a crowded security market. We also touch on the future of security in design specifications and the need for a comprehensive approach to security that includes all stakeholders in the software development lifecycle.

Key Takeaways
- The traditional model of security tools is being challenged.
- Pixee aims to automate not just detection but also remediation.
- AI agents can help scale coverage in application security.
- The role of AppSec professionals will evolve with AI integration.
- Trust is crucial for developers to accept automated fixes.
- Developers want tools that reduce their workload, not add to it.
- Contextual understanding is key for accurate vulnerability triage.
- The security market is not saturated; there are still many unsolved problems.
- Integrating security into design specifications is the future.
- A comprehensive approach to security is necessary for effective risk management.

Tune in to find out more!

Contacting Surag & Arshan
* Surag's LinkedIn: https://www.linkedin.com/in/suragpatel/
* Arshan's LinkedIn: https://www.linkedin.com/in/arshan-dabirsiaghi/
* Pixee: https://www.pixee.ai/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
Jul 15, 2025 • 55min

Ken Johnson

In this episode, we talk to Ken Johnson, Co-Founder & CTO @ DryRun Security. Ken discusses the evolution of application security, focusing on the role of AI and LLMs in enhancing security practices. He emphasizes the importance of context engineering over traditional prompt engineering, the challenges of consistency and repeatability in LLM outputs, and the ethical considerations surrounding AI in security. The discussion also highlights the need for orchestration in AI applications and the future potential of AI in the security landscape.

Key Takeaways
- DryRun Security utilizes AI to enhance code security.
- Context engineering is crucial for effective AI applications (a rough sketch follows these notes).
- LLMs can augment security practices but require careful orchestration.
- Consistency in LLM outputs is a significant challenge.
- Ethical considerations in AI are becoming increasingly important.
- Finding the right balance in using LLMs is essential.
- Community collaboration is vital for advancing AI solutions.
- Orchestration is a key factor in AI performance.
- AI will not replace jobs but will change how we work.

Tune in to find out more!

Contacting Ken
* LinkedIn: https://www.linkedin.com/in/cktricky/
* DryRun Security: https://www.dryrun.security/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
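Context engineering, as opposed to prompt engineering, means the effort goes into assembling structured facts before the model is asked anything. A rough sketch of the idea; complete(), fetch_callers(), and fetch_routes() are hypothetical placeholders, not DryRun Security's API:

```python
# Gather structured facts about a code change first, then ask the model one
# narrow, well-scoped question.
def build_review_context(diff: str, fetch_callers, fetch_routes) -> str:
    """Assemble the context an LLM needs before it can judge a change."""
    sections = {
        "DIFF": diff,                    # the change itself
        "CALLERS": fetch_callers(diff),  # who invokes the changed code
        "ROUTES": fetch_routes(diff),    # exposed endpoints that touch it
    }
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections.items())

def review_change(diff: str, complete, fetch_callers, fetch_routes) -> str:
    context = build_review_context(diff, fetch_callers, fetch_routes)
    # The question stays short; the engineering effort went into the context.
    question = ("\n\nGiven the context above, does this change alter "
                "authorization, input handling, or data exposure? Cite lines.")
    return complete(context + question)
```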
Jul 3, 2025 • 54min

Casey Ellis

In this episode, we talk to Casey Ellis, Founder & Advisor @ Bugcrowd. Casey shares his personal journey through health challenges and his insights into the cybersecurity landscape. He discusses the evolution of the bug bounty industry, the importance of secure design, and the role of AI in both enhancing and complicating security measures. Casey emphasizes the need for accountability and the potential of crowdsourcing in security, while also addressing the challenges of implementing effective standards. The conversation concludes with reflections on the future of AI in security and the necessity for focused problem-solving in the industry.

Key Takeaways
- The bug bounty industry has transformed lives and created new opportunities.
- Founding a company involves learning from both successes and failures.
- The cybersecurity industry often focuses on quick wins rather than fundamental problems.
- Secure by design is essential for addressing root causes of vulnerabilities.
- Crowdsourcing can enhance accountability in security practices.
- Standards like ASVS are important but can be complex to implement.
- AI is both a tool and a threat in the cybersecurity landscape.
- Focusing on specific problems is key to leveraging AI effectively.

Tune in to find out more!

Contacting Casey
* LinkedIn: https://www.linkedin.com/in/caseyjohnellis/
* Bugcrowd: https://www.bugcrowd.com/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
Mar 9, 2025 • 47min

S2E10 - Vivek Ramachandran

In Season 2 Episode 10, we talk to Vivek Ramachandran, Founder @SquareXTeam. Vivek shares his journey in cybersecurity, discussing the evolution of content creation, the importance of building for a global audience, and navigating the Indian cybersecurity market. He emphasizes the need for browser security, the challenges of local markets, and the significance of personal relationships in business. Vivek also shares insights on the challenges founders face, particularly in breaking into the U.S. market, and the value of building a strong advisor network and engaging in technical conversations. The discussion delves into the evolving landscape of cybersecurity, highlighting the impact of AI on both attackers and defenders, and Vivek offers advice for new startup founders: be patient, understand the responsibilities of fundraising, and focus on fundamental skills.

Key Takeaways
- The browser is now considered the new endpoint for security.
- Pentester Academy was born out of a need to share knowledge.
- Content creation has evolved significantly over the years; today's audience prefers bite-sized, impactful content.
- Founders should think globally from the start.
- Cybersecurity in India is often driven by compliance rather than necessity.
- Technical founders must adapt to market needs and customer relationships.
- Design partnerships can help startups gain traction in local markets.
- Founders often give up after a few rejections.
- Building an advisor network is essential for success.
- AI is changing the dynamics of cybersecurity.
- Raising funds is a responsibility, not a success metric.
- Focus on fundamentals to stay relevant in tech.
- Learning by doing is becoming too easy with AI.
- Engage with your target market to build credibility.

Tune in to find out more!

Contacting Vivek
* LinkedIn: https://www.linkedin.com/in/vivekramachandran/
* SquareX: https://www.sqrx.com/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
Mar 3, 2025 • 44min

S2E9 - Ali Mesdaq

In Season 2 Episode 9, we talk to Ali Mesdaq, Founder & CEO @ Amplify Security. We discuss the evolution of security tools, the importance of customer validation, and the role of AI agents in enhancing security practices. Ali shares insights on building a positive security culture within organizations and how Amplify Security differentiates itself in a competitive market. The conversation emphasizes the need for collaboration between security and development teams, the challenges of addressing known and unknown vulnerabilities, and the future of AI in cybersecurity.

Key Takeaways
- Amplify helps coders secure their code effectively.
- Customer validation is crucial for startup confidence.
- Security tools should enhance developer experience.
- AI agents can automate security fixes intelligently.
- Contextual understanding is vital for security solutions.
- Developers should approve code changes for security fixes.
- A positive security culture fosters collaboration.
- AI can help prioritize and manage vulnerabilities.
- The future of security involves AI-driven solutions.
- Security issues must be addressed in a timely manner.

Tune in to find out more!

Contacting Ali
* LinkedIn: https://www.linkedin.com/in/amesdaq/
* Amplify Security: https://amplify.security/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
