
The Future of Developer Security with Travis McPeak
The Boring AppSec Podcast
Vendor strategy and integrating best tools
They discuss buy vs build, integration costs, and when platforms add value versus unnecessary bundled features.
In this episode, we sit down with Travis McPeak, one of the most prominent thinkers in the space of developer security. Travis, who built his career at the intersection of security automation and developer productivity, shares his philosophy on achieving security at scale in the AI era. His career spans security leadership roles at major tech companies, including Symantec, IBM, Netflix, and Databricks. Most recently, he founded and served as CEO of Resourcely, a startup built on the idea of making cloud infrastructure secure by default, before being "acqui-hired" by Cursor, the rapidly growing AI-powered code editor, to lead security and enterprise readiness.
Key Takeaways
- AI for Secure by Default: AI tools provide the best injection point to shift security "all the way left" and move past the reactive "whack-a-mole" approach, because developers are already motivated to use these highly effective tools.
- Changing AppSec Strategy: AI dramatically changes the nature of AppSec by making previously unscalable strategies, such as threat modeling, applicable. AI can generate architecture diagrams on demand by tracing through code.
- The Compliance Bottleneck: The dramatic consolidation of cloud security vendors reflects how compliance-minded the security industry remains. Critical infrastructure misconfigurations (like public databases being left open) often go unaddressed because they are not measured by compliance standards.
- Platform vs. Point Solutions: Travis argues against platforms that are often amalgamations of poorly integrated acquired tools. He suggests buying the single best point solution for a high-leverage problem and using AI capabilities to operationalize and wire it into internal systems, thereby simplifying integrations that platforms traditionally provide.
- The Skeptical Coder: A fundamental limitation of Large Language Models (LLMs) is their desire to "make you happy," causing them to provide answers even if they are incorrect. Therefore, engineers must use AI output only as a starting point and only consider the code finished when they understand it fully end to end.
- Prompt Injection Defined: Prompt injection is confirmed as a legitimate vulnerability, essentially a rehash of old issues like cross-site scripting and SQL injection, arising from the improper separation between the LLM instruction and the user instruction.
Tune in for a deep dive!
Contacting Travis
* LinkedIn: https://www.linkedin.com/in/travismcpeak/
* Company Website: https://www.cursor.com
Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya
Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/


