Cloud Security Podcast by Google

Anton Chuvakin
Jan 26, 2026 • 30min

EP260 The Agentic IAM Trainwreck: Why Your Bots Need Better Permissions Than Your Admins

Vishwas Manral, CEO of Precize.ai and author on agentic AI risks, brings networking and security protocol experience. He explains how agents act as runtime app logic and why IAM for agents is uniquely tricky. The conversation covers early risk guidance, constraining agent permissions, shared responsibility across providers, and emerging AI-on-AI threats.
Jan 19, 2026 • 34min

EP259 Why DeepMind Built a Security LLM Sec-Gemini and How It Beats the Generalists

Elie Bursztein, a Distinguished Scientist at Google DeepMind, dives into Sec-Gemini, an AI tailored for cybersecurity. He discusses how it uses real-time data to enhance defensive measures and why it outperforms general-purpose AI in tasks like digital forensics and penetration testing. Elie also shares insights on the motivations behind developing specialized AI for security, the challenges of deploying patches, and the unexpected use cases that emerged from testers.
Jan 12, 2026 • 32min

EP258 Why Your Security Strategy Needs an Immune System, Not a Fortress with Royal Hansen

Guest: Royal Hansen, VP of Engineering at Google, former CISO of Alphabet

Topics:
- The "God-Like Designer" Fallacy: You've argued that we need to move away from the "God-like designer" model of security, where we pre-calculate every risk like building a bridge, and towards a biological model. Can you explain why that old engineering mindset is becoming risky in today's cloud and AI environments?
- Resilience vs. Robustness: In your view, what is the practical difference between a robust system (like a fortress that eventually breaks) and a resilient system (like an immune system)? How does a CISO start shifting their team's focus from creating the former to nurturing the latter?
- Securing the Unknown: We're entering an era where AI agents will call other agents, creating pathways we never explicitly designed. If we can't predict these interactions, how can we possibly secure them? What does "emergent security" look like in practice?
- Primitives for Agents: You mentioned the need for new "biological primitives" for these agents, things like time-bound access or inherent throttling. Are these just new names for old concepts like Zero Trust, or is there something different about how we need to apply them to AI?
- The Compliance Friction: There's a massive tension between this dynamic, probabilistic reality and the static, checklist-based world of many compliance regimes. How do you, as a leader, bridge that gap? How do you convince an auditor or a board that a "probabilistic" approach doesn't just mean "we don't know for sure"?
- "Safe" Failures: How can organizations get comfortable with the idea of designing for allowable failure in their subsystems, rather than striving for 100% uptime and security everywhere?

Resources:
- Video version
- EP189 How Google Does Security Programs at Scale: CISO Insights
- BigSleep and CodeMender agents
- "Chasing the Rabbit" book
- "How Life Works: A User's Guide to the New Biology" book
Jan 5, 2026 • 27min

EP257 Beyond the 'Kaboom': What Actually Breaks When OT Meets the Cloud?

Guest: Chris Sistrunk, Technical Leader, OT Consulting, Mandiant

Topics:
- When we hear "attacks on Operational Technology (OT)," some think of Stuxnet targeting PLCs, or even the backdoored pipeline control software plot from the 1980s. Is this space always so spectacular, or are there less "kaboom"-style attacks we are more concerned about in practice?
- Given the old "air-gapped" mindset of many OT environments, what are the most common security gaps or blind spots you see when organizations start to integrate cloud services for things like data analytics or remote monitoring?
- How is the shift to cloud connectivity - for things like data analytics, centralized management, and remote access - changing the security posture of these systems? What's a real-world example of a positive security outcome you've seen as a direct result of this cloud adoption?
- How do the Tactics, Techniques, and Procedures outlined in the MITRE ATT&CK for ICS framework change or evolve when attackers can leverage cloud-based reconnaissance and command-and-control infrastructure to target OT networks? Can you provide an example?
- OT environments generate vast amounts of operational data. What is interesting for OT Detection and Response (D&R)?

Resources:
- Video version
- Cybersecurity Forecast 2026 report by Google
- "Complex, hybrid manufacturing needs strong security. Here's how CISOs can get it done" blog
- "Security Guidance for Cloud-Enabled Hybrid Operational Technology Networks" paper by Google Cloud Office of the CISO
- DEF CON 23 - Chris Sistrunk - NSM 101 for ICS
- MITRE ATT&CK for ICS
Dec 15, 2025 • 33min

EP256 Rewiring Democracy & Hacking Trust: Bruce Schneier on the AI Offense-Defense Balance

Guest: Bruce Schneier

Topics:
- Do you believe that AI is going to end up being a net improvement for defenders or attackers? Is the short-term answer different from the long-term one?
- We're excited about the new book you have coming out with your co-author Nathan Sanders, "Rewiring Democracy". We want to ask the same question, but for society: do you think AI is going to end up helping the forces of liberal democracy, or the forces of corruption, illiberalism, and authoritarianism?
- If exploitation is always cheaper than patching (and attackers don't follow as many rules and procedures), do we have a chance here? If this requires pervasive and fast "humanless" automatic patching (kind of like what Chrome has done for years), will this ever work for most organizations?
- Do defenders have to do the same and just discover and fix issues faster? Or can we use AI somehow differently? Does this make defense in depth more important?
- How do you see AI changing how society develops and maintains trust?

Resources:
- "Rewiring Democracy" book
- "Infomocracy" trilogy
- Agentic AI's OODA Loop Problem
- EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking
- AI and Trust
- AI and Data Integrity
- EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
- RSA 2025: AI's Promise vs. Security's Past — A Reality Check
Dec 8, 2025 • 30min

EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking

Heather Adkins, VP of Security Engineering at Google, shares her insights on the emerging threat of autonomous AI hacking. She discusses the term 'AI Hacking Singularity,' weighing the reality against hyperbole. Can AI achieve ‘machine velocity’ exploits without human input? Heather outlines potential worst-case scenarios, from global infrastructure collapses to waves of automated attacks. She also emphasizes the need for redefined defense strategies and the impact on the software supply chain, urging proactive engagement with regulators to navigate this complex threat landscape.
Dec 1, 2025 • 31min

EP254 Escaping 1990s Vulnerability Management: From Unauthenticated Scans to AI-Driven Mitigation

Caleb Hoch, a Consulting Manager at Mandiant, specializes in cyber defense and vulnerability management transformation. He discusses the outdated nature of vulnerability management practices that still linger since the 1990s. Caleb explains why many organizations shy away from authenticated scans due to fear and resource issues. He outlines a gold-standard prioritization process for 2025 that incorporates contextual factors. Additionally, he warns of AI's rapid impact on exploit development, emphasizing the urgent need for effective mitigation strategies.
Nov 24, 2025 • 28min

EP253 The Craft of Cloud Bug Hunting: Writing Winning Reports and Secrets from a VRP Champion

Sivanesh Ashok and Sreeram KL, both accomplished bug bounty hunters and top contributors to Google's Cloud Vulnerability Reward Program, share their expertise on cloud security. They discuss the art of writing clear and effective bug reports, emphasizing reproducibility to aid triage. The duo dives into the dynamics of collaboration in bug hunting and how to navigate volatility in the field. They reveal insights on targeting integration bugs and offer invaluable advice for aspiring hunters: consistency, patience, and a deep understanding of threat models.
Nov 17, 2025 • 36min

EP252 The Agentic SOC Reality: Governing AI Agents, Data Fidelity, and Measuring Success

In this discussion, Alexander Pabst, Deputy Group CISO at Allianz, and Lars Koenig, Global Head of Detection & Response, explore the transformative journey of moving from traditional security information and event management (SIEM) to an agentic SOC model. They delve into the intricacies of governing AI agents, emphasizing the balance between automation and necessary human oversight. The guests share insights on enhancing data fidelity, unexpected challenges during implementation, and the dramatic efficiency gains achieved, including saving 68 analyst-years per quarter.
Nov 10, 2025 • 25min

EP251 Beyond Fancy Scripts: Can AI Red Teaming Find Truly Novel Attacks?

Ari Herbert-Voss, Founder and CEO of RunSybil and former security lead at OpenAI, dives into AI-powered red teaming. He discusses how Sybil automates discovery, testing, and remediation of security flaws, particularly excelling at finding tricky authentication bugs. The conversation addresses how to augment human efforts without replacing them entirely, and the importance of actionable insights for development teams. Ari also shares real-world successes, showing how Sybil can uncover significant vulnerabilities rapidly while scaling security efforts.
