Cloud Security Podcast by Google

Anton Chuvakin
May 12, 2025 • 31min

EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps

Guest: Diana Kelley, CSO at Protect AI

Topics:
- Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with "Dev" replaced by "ML"? This has nothing to do with SecOps, right?
- What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it?
- How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
- In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
- How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
- What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?
- Top differences between LLM/chatbot AI security vs. AI agent security?

Resources:
- "Airline held liable for its chatbot giving passenger bad advice - what this means for travellers"
- "ChatGPT Spit Out Sensitive Data When Told to Repeat 'Poem' Forever"
- Secure by Design for AI by Protect AI
- "Securing AI Supply Chain: Like Software, Only Not"
- OWASP Top 10 for Large Language Model Applications
- OWASP Top 10 for AI Agents (draft)
- MITRE ATLAS
- "Demystifying AI Security: New Paper on Real-World SAIF Applications" (and paper)
- LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes
May 5, 2025 • 32min

EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025

Hosts share their insights from the RSA cybersecurity conference, revealing a mix of excitement and skepticism about AI in cloud security. They analyze the potential of AI SOCs while cautioning against the pitfalls of automation. The reliance on outdated security technology is debated, alongside the importance of human oversight in AI applications. Humorous anecdotes lighten the discussion, including memorable marketing strategies and adventures at the event. Ultimately, the conversation navigates the evolving landscape of AI-native technologies versus adding AI to existing platforms.
Apr 28, 2025 • 35min

EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends

In this engaging discussion, Kirstie Failey from the Google Threat Intelligence Group and Scott Runnels from Mandiant Incident Response dive into the art of transforming incident reports into the M-Trends report. They explore the paradox of learning from past incidents versus proactive security measures. The duo uncovers the complexities of 'dwell time' metrics and why repeated security mistakes persist. They also discuss the unique challenges faced by smaller organizations and the necessity of effective storytelling in cybersecurity reporting. A must-listen for security enthusiasts!
Apr 23, 2025 • 30min

EP221 Special - Semi-Live from Google Cloud Next 2025: AI, Agents, Security ... Cloud?

The chaotic vibes of a live conference set the stage for insightful talks on AI’s growing role in security. Discussions unveiled the Model Armor initiative and the evolving integration of AI with cybersecurity. Surprising trends and marketing strategies caught attention, while a hopeful outlook emerged for transforming Security Operations Centers. The urgency for security professionals to adopt AI was emphasized, with a clear warning: adapt or risk falling behind in this fast-evolving landscape.
Apr 21, 2025 • 29min

EP220 Big Rewards for Cloud Security: Exploring the Google VRP

Guests: Michael Cote, Cloud VRP Lead, Google Cloud; Aadarsh Karumathil, Security Engineer, Google Cloud

Topics:
- Vulnerability response at cloud scale sounds very hard! How do you triage vulnerability reports and make sure we're addressing the right ones in the underlying cloud infrastructure?
- How do you determine how much to pay for each vulnerability? What is the largest reward we paid? What was it for?
- What products get the most submissions? Is this driven by the actual product security or by trends and fashions like AI?
- What are the most likely rejection reasons?
- What makes for a very good - and exceptional? - vulnerability report? We hear we pay more for "exceptional" reports, what does that mean?
- In college Tim had a roommate who would take us out drinking on his Google web app vulnerability rewards. Do we have something similar for people reporting vulnerabilities in our cloud infrastructure? Are people making real money off this?
- How do we actually uniquely identify vulnerabilities in the cloud? CVE does not work well, right?
- What are the expected risk reduction benefits from Cloud VRP?

Resources:
- Cloud VRP site
- Cloud VRP launch blog
- CVR: The Mines of Kakadûm
Apr 14, 2025 • 32min

EP219 Beyond the Buzzwords: Decoding Cyber Risk and Threat Actors in Asia Pacific

Steve Ledzian, APAC CTO at Mandiant, dives into the evolving landscape of cybersecurity in the Asia Pacific region. He discusses how many boards still see cyber risks solely as technical issues, missing critical human factors. Steve tackles the confusing jargon plaguing the industry, emphasizing clear communication. He highlights unexpected benefits from the Google-Mandiant merger and shares insights on reducing dwell time in cyber incidents. Finally, he forecasts significant cybersecurity challenges ahead and what organizations should do now to prepare.
Apr 7, 2025 • 30min

EP218 IAM in the Cloud & AI Era: Navigating Evolution, Challenges, and the Rise of ITDR/ISPM

Henrique Teixeira, Senior VP of Strategy at Saviynt and former Gartner analyst, dives into the evolution of Identity and Access Management (IAM) amidst cloud and AI advancements. He addresses the challenges and opportunities these shifts create, particularly with ITDR (Identity Threat Detection and Response) and ISPM (Identity Security Posture Management). The discussion explores the unique security needs of machine identities versus human identities, as well as tips for creating memorable tech acronyms, blending humor with valuable insights on identity management.
Mar 31, 2025 • 23min

EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes?

In a fascinating discussion, Alex Polyakov, CEO of Adversa AI and expert in AI red teaming, dives into the vulnerabilities plaguing AI systems. He recounts a memorable red teaming exercise that unveiled surprising flaws. Polyakov highlights emerging threats like linguistic-based attacks and emphasizes how classic security mistakes resurface in AI. He critiques the industry's misconceptions about AI security and prompts organizations to rethink their cyber frameworks. Furthermore, he discusses the irony of using AI to safeguard AI, raising essential questions about the future of technology.
Mar 24, 2025 • 32min

EP216 Ephemeral Clouds, Lasting Security: CIRA, CDR, and the Future of Cloud Investigations

In this enlightening discussion, James Campbell, CEO of Cado Security, and Chris Doman, CTO, dive into the evolving landscape of cloud security. They clarify the differences between Cloud Detection and Response (CDR) and Cloud Investigation and Response Automation (CIRA), highlighting the critical role automation plays in enhancing security. The conversation explores the challenges of ephemeral cloud infrastructure and its impact on compliance. Listeners will gain insights into how modern SIEM/SOAR systems can integrate with CIRA for better cloud security strategies.
Mar 17, 2025 • 26min

EP215 Threat Modeling at Google: From Basics to AI-powered Magic

Meador Inge, a security engineer at Google, dives into the intricacies of threat modeling, detailing its essential steps and applications in complex systems. He explains how Google continuously updates its threat models and operationalizes the information to enhance security. The conversation explores the challenges faced in scaling threat modeling practices and how AI, particularly large language models like Gemini, is reshaping the landscape. With a humorous twist, Inge shares insights into unexpected threats and effective strategies for organizations starting their threat modeling journey.