AI Security Podcast

Kaizenteq Team
undefined
5 snips
Dec 10, 2025 • 39min

AI Paywall for Browsers & The End of the Open Web?

Cloudflare's new policy requires AI bots to pay for crawling web content, raising questions about the future of the open web. The hosts discuss how this could lead to a system where information is treated as currency. They explore the security implications, emphasizing the need for strict identity checks for AI and human access. A new open-source browser, Ladybird, is introduced as a competitor to Chromium, focusing on payment integration for content. The idea of browsers becoming payment gateways is also examined, hinting at a shift toward consumer micropayments.
undefined
8 snips
Dec 3, 2025 • 51min

Build vs. Buy in AI Security: Why Internal Prototypes Fail & The Future of CodeMender

The debate on whether to build or buy AI security tools heats up with insights on Google's CodeMender, which autonomously finds and fixes vulnerabilities. The challenges of scaling prototypes into production-grade solutions lead to alarming failures within 18 months. They discuss incentives for internal teams that drive unnecessary AI expansion, potentially igniting an AI bubble. Predictions emerge about the shift towards auto-personalized security products that adapt to environments, as the hype around 'agentic AI' raises more questions than answers.
undefined
49 snips
Nov 6, 2025 • 58min

Inside the 29.5 Million DARPA AI Cyber Challenge: How Autonomous Agents Find & Patch Vulns

Michael Brown, Principal Security Engineer at Trail of Bits and leader of the Buttercup project in DARPA's AI Cyber Challenge, shares insights into building autonomous AI systems for vulnerability detection. He reveals how Buttercup, despite its initial skepticism, impressed with high-quality patch generation thanks to a 'best of both worlds' approach combining AI with traditional methods. Michael also discusses the competition's unique challenges, the importance of robust engineering, and practical tips for applying AI in security tasks. The future of Buttercup aims at automatic bug fixes at scale for the open-source community.
undefined
34 snips
Oct 23, 2025 • 52min

Anthropic's AI Threat Report: Real Attacks, Simulated Competence & The Future of Defense

Dive into the alarming findings of a recent AI Threat Intelligence report. Discover how AI-enabled biohacking and extortion strategies are transforming cybercrime. Learn about North Korean IT workers leveraging AI to simulate technical skills for Fortune 500 jobs. Explore the rise of ransomware-as-a-service, making sophisticated attacks accessible to less skilled actors. The discussion also highlights gaps in identity verification and the complexities of AI in scaling fraud and malware, revealing a landscape where AI is professionalizing existing threats.
undefined
40 snips
Oct 18, 2025 • 1h 2min

How Microsoft Uses AI for Threat Intelligence & Malware Analysis

Thomas Roccia, a Senior Threat Researcher at Microsoft specializing in AI applications for malware analysis, discusses groundbreaking concepts like the 'Indicator of Prompt Compromise' (IOPC). He shares insights on his open-source projects, including NOVA, a tool to detect malicious prompts. The conversation explores using AI to track complex crypto laundering schemes, simplifying reverse engineering, and how AI enhances threat intelligence. Roccia also highlights the shift in skill accessibility, where advanced tasks become manageable for more professionals.
undefined
87 snips
Sep 9, 2025 • 1h 25min

The Future of AI Security is Scaffolding, Agents & The Browser

In this discussion, Jason Haddix, an offensive security expert from Arcanum, and Daniel Miessler, founder of Unsupervised Learning, dive into the 2025 landscape of AI security. They reveal how LLMs are leaking into broader ecosystems, becoming tools for malicious prompts and exploiting vulnerabilities. The duo highlights the critical yet unsolved problem of prompt injection and the challenges posed by privacy laws on incident response. They emphasize the need for innovative threat modeling and proactive security measures to navigate this evolving danger.
undefined
44 snips
Aug 22, 2025 • 52min

A CISO's Blueprint for AI Security (From ML to GenAI)

Damian Hasse, CISO of Moveworks and a security expert from Amazon's Alexa, offers a deep dive into AI security. He discusses how the current AI hype cycle differs from past failures and the importance of expertise in AI Councils. Hasse shares his framework for assessing AI risks, focusing on specific use cases and data protection. He addresses threats like prompt injection and outlines strategies to mitigate security risks in AI-assisted environments, making this a must-listen for security leaders navigating the complexities of modern AI.
undefined
27 snips
Jul 31, 2025 • 36min

Gen AI Threat Modeling vs. AI-Powered Defense:

Join Jackie Bow, the Technical Lead of Threat Detection Engineering at Anthropic, and Kane Narraway, who heads the Enterprise Security Team at Canva, as they dive deep into the dual-edged sword of AI in security. Jackie reveals how AI, specifically Claude, revolutionizes threat detection by breaking traditional barriers. In contrast, Kane emphasizes the risks tied to AI integrations, arguing that many challenges mirror existing vulnerabilities. Together, they explore innovative threat modeling strategies while balancing the need for strong security with the power of AI.
undefined
8 snips
Jun 27, 2025 • 1h

Vibe Coding for CISOs: Managing Risk & Opportunity in AI Development

Discover how 'Vibe Coding' transforms the role of non-engineers in software development, allowing rapid application deployment. Learn to harness AI tools for effective project management and overcome challenges in scaling coding projects. Explore the proactive strategies needed to navigate security risks with AI-generated applications. The discussion also emphasizes the significance of maintaining a structured approach to innovation while ensuring compliance. Plus, hear personal anecdotes that illustrate the balance between creativity and security in tech.
undefined
9 snips
Jun 12, 2025 • 49min

Vibe Coding, Slopsquatting, and the Future of AI in Software Development

In this engaging discussion, Guy Podjarny, founder of Snyk and Tessl, dives into the future of AI in software development. He introduces 'vibe coding,' where developers increasingly rely on AI-generated code with less oversight, sparking opportunities and significant risks. The conversation also touches on 'slopsquatting,' a new security threat from AI-generated fake library names. Guy emphasizes the shifting role of developers towards managing AI workflows and highlights the importance of clear specifications and rigorous testing in a rapidly evolving tech landscape.

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app