AI Security Podcast

Kaizenteq Team
Jan 28, 2026 • 1h 1min

AI Security 2026 Predictions: The "Zombie Tool" Crisis & The Rise of AI Platforms

The hosts predict an incoming “zombie tool” crisis in which unmaintained internal AI tools rot as the staff who built them churn. They debate rising, possibly fixed AI token costs and the shift from building many scattered features to centralized AI platform teams, and discuss a capability plateau where models keep improving yet feel the same, alongside persistent threats like prompt injection and identity-related “confused deputy” risks.
46 snips
Jan 23, 2026 • 51min

Why AI Agents Fail in Production: Governance, Trust & The "Undo" Button

Dev Rishi, GM of AI at Rubrik and former Predibase CEO, shares lessons from building and deploying generative AI for enterprises. He discusses why agents stall in read-only mode, the three top IT fears—shadow agents, governance, and the need to undo damage—and the concept of Agent Rewind. The conversation also covers real-time policy enforcement, using small language models as judges, and protocol debates like MCP vs A2A.
17 snips
Dec 19, 2025 • 1h 3min

AI Security 2025 Wrap: 9 Predictions Hit & The AI Bubble Burst of 2026

Reflecting on 2025, the hosts reveal their accuracy in predictions, triumphantly hitting 9 out of 9. They discuss the impact of SOC automation, the struggles of AI production systems, and the surge in AI Red Teaming amid rising costs. Looking to 2026, they boldly predict the inevitable bursting of the AI bubble and the rise of self-fine-tuning models. They raise eyebrows over the role of 'AI Engineers' and share insights on data security's increasing importance due to regulatory pressures. A year-end wrap that’s both insightful and entertaining!
5 snips
Dec 10, 2025 • 39min

AI Paywall for Browsers & The End of the Open Web?

Cloudflare's new policy requires AI bots to pay for crawling web content, raising questions about the future of the open web. The hosts discuss how this could lead to a system where information is treated as currency. They explore the security implications, emphasizing the need for strict identity checks for AI and human access. A new open-source browser, Ladybird, is introduced as a competitor to Chromium, focusing on payment integration for content. The idea of browsers becoming payment gateways is also examined, hinting at a shift toward consumer micropayments.
9 snips
Dec 3, 2025 • 51min

Build vs. Buy in AI Security: Why Internal Prototypes Fail & The Future of CodeMender

The debate on whether to build or buy AI security tools heats up with insights on Google's CodeMender, which autonomously finds and fixes vulnerabilities. The challenges of scaling prototypes into production-grade solutions lead to alarming failures within 18 months. They discuss incentives for internal teams that drive unnecessary AI expansion, potentially igniting an AI bubble. Predictions emerge about the shift towards auto-personalized security products that adapt to environments, as the hype around 'agentic AI' raises more questions than answers.
49 snips
Nov 6, 2025 • 58min

Inside the $29.5 Million DARPA AI Cyber Challenge: How Autonomous Agents Find & Patch Vulns

Michael Brown, Principal Security Engineer at Trail of Bits and leader of the Buttercup project in DARPA's AI Cyber Challenge, shares insights into building autonomous AI systems for vulnerability detection. Despite initial skepticism, Buttercup impressed with high-quality patch generation thanks to a 'best of both worlds' approach combining AI with traditional methods. Michael also discusses the competition's unique challenges, the importance of robust engineering, and practical tips for applying AI in security tasks. The future of Buttercup aims at automatic bug fixes at scale for the open-source community.
34 snips
Oct 23, 2025 • 52min

Anthropic's AI Threat Report: Real Attacks, Simulated Competence & The Future of Defense

Dive into the alarming findings of a recent AI Threat Intelligence report. Discover how AI-enabled biohacking and extortion strategies are transforming cybercrime. Learn about North Korean IT workers leveraging AI to simulate technical skills for Fortune 500 jobs. Explore the rise of ransomware-as-a-service, making sophisticated attacks accessible to less skilled actors. The discussion also highlights gaps in identity verification and the complexities of AI in scaling fraud and malware, revealing a landscape where AI is professionalizing existing threats.
40 snips
Oct 18, 2025 • 1h 2min

How Microsoft Uses AI for Threat Intelligence & Malware Analysis

Thomas Roccia, a Senior Threat Researcher at Microsoft specializing in AI applications for malware analysis, discusses groundbreaking concepts like the 'Indicator of Prompt Compromise' (IOPC). He shares insights on his open-source projects, including NOVA, a tool to detect malicious prompts. The conversation explores using AI to track complex crypto laundering schemes, simplifying reverse engineering, and how AI enhances threat intelligence. Roccia also highlights the shift in skill accessibility, where advanced tasks become manageable for more professionals.
87 snips
Sep 9, 2025 • 1h 25min

The Future of AI Security is Scaffolding, Agents & The Browser

In this discussion, Jason Haddix, an offensive security expert from Arcanum, and Daniel Miessler, founder of Unsupervised Learning, dive into the 2025 landscape of AI security. They reveal how LLMs are leaking into broader ecosystems, becoming tools for malicious prompts and exploiting vulnerabilities. The duo highlights the critical yet unsolved problem of prompt injection and the challenges posed by privacy laws on incident response. They emphasize the need for innovative threat modeling and proactive security measures to navigate this evolving danger.
44 snips
Aug 22, 2025 • 52min

A CISO's Blueprint for AI Security (From ML to GenAI)

Damian Hasse, CISO of Moveworks who previously led security for Amazon's Alexa, offers a deep dive into AI security. He discusses how the current AI hype cycle differs from past failures and the importance of expertise in AI Councils. Hasse shares his framework for assessing AI risks, focusing on specific use cases and data protection. He addresses threats like prompt injection and outlines strategies to mitigate security risks in AI-assisted environments, making this a must-listen for security leaders navigating the complexities of modern AI.
