

Zero Signal
Conor Sherman
Zero Signal is a high-energy podcast for cybersecurity leaders, co-hosted by Conor Sherman and Stuart Mitchell. It blends current events with in-depth conversations with seasoned security executives, thinkers, and builders, asking what works, what's broken, and what's next as AI redefines trust in the digital world.
Episodes

Sep 19, 2025 • 41min
The AI Divide, Orphaned Agents, and Ransomware That Negotiates Back
AI is redrawing the economic map while vendors rush to “platformize” and attackers weaponize LLMs. Leaders must push for real platforms (shared data planes + policy layers), avoid “platform-in-name-only” lock-in, and prepare for agentic threats like PromptLock.

Key Topics & Timestamps
(00:00) Introduction — Why this week matters: the AI divide, a platformization reality check, and agentic ransomware.
(02:10) Topic 1 — The AI Divide: Anthropic’s index shows productivity clustering in high-adoption regions; implications for hiring, policy, and multi-national execution.
(12:00) Topic 2 — Platformization & Consolidation: CrowdStrike–Pangea and Check Point–Lakera signal an AI-security land grab; what a “true platform” means; buyer guardrails.
(22:40) Topic 3 — PromptLock & Agentic Threats: ransomware that personalizes and negotiates; how to update IR/comms playbooks.
(31:30) Closing — Play offense: evidence-based platformization, workforce redesign, agentic blue-team prep.

Resources & References
Articles / Studies
Anthropic: Economic Index — global AI adoption & productivity
HR Grapevine: Zoom chief predicts three-day workweeks & role erosion
Wall Street Journal: CrowdStrike to buy AI security company Pangea
CyberScoop: Check Point to acquire Lakera for AI security
ESET / WeLiveSecurity: PromptLock ransomware uses ChatGPT/LLMs
AI Darwin Awards: Taco Bell drive-thru fiasco
Venture in Security (Ross Haleliuk): Consolidation & platformization essays | LinkedIn activity
Tools / Frameworks
NIST AI RMF — governance + risk controls: https://www.nist.gov/itl/ai-risk-management-framework
OWASP GenAI / LLM Top 10 — threat categories: https://genai.owasp.org/llm-top-10/

Sep 17, 2025 • 44min
Navigating the Cybersecurity Economy ft. Mike Privette
Quick Take (TL;DR)
This episode examines the evolving cybersecurity economy, the impact of AI on security roles and investments, and why trust, adaptability, and community are more crucial than ever for security leaders.

Key Topics & Timestamps
(00:00) Introduction — Mike’s journey as the first security hire at a FinTech and the realities of building trust in security leadership.
(04:32) Security Leadership — Strategies for first-time CISOs, balancing technical depth with business needs, and the importance of level-setting expectations.
(08:36) The Cybersecurity Economy — Mike’s five-pillar framework: investment, government, regulation, labor market, and community.
(13:07) AI’s Impact — How AI is reshaping security investments, the rise of AI-enabled tools, and the explosion of red teaming for AI applications.
(20:09) Evolving Roles — The growing importance of AI governance, the dual mandate for CISOs, and the enduring need for fundamentals like authentication and identity.
(34:34) Mike’s advice on building a personal brand and sharing experiences.
(41:27) The future of Return on Security.
(43:44) Closing

Guest Spotlight
Mike Privette is the founder of Return on Security and is recognized as the industry’s first cybersecurity economist. He’s known for his in-depth analysis of funding trends, M&A, and the shifting landscape of security and AI. Mike’s work has been featured at B-Sides and is followed by thousands of industry leaders.
Connect with Mike: LinkedIn | Newsletter

Resources & References
Articles / Studies
Mike’s annual cybersecurity funding reports: Return on Security Newsletter
Tools / Frameworks
AI Red Teaming (general concept, not a specific tool)
Mike’s Five-Pillar Cybersecurity Economy Framework (investment, government, regulation, labor, community)

Call to Action
Conor Sherman — LinkedIn | Website | Sysdig
Stuart Mitchell — LinkedIn | Website
Subscribe: Apple Podcasts | Spotify | YouTube | Website

Sep 12, 2025 • 39min
Talent Shifts, Safer AI, and the Jobs Cooldown
Summary
In this episode, Conor Sherman and Stuart Mitchell discuss the evolving landscape of education, job markets, and AI regulation. They explore the implications of Gen Z's shifting attitudes towards college, the impact of AI on job security, and Anthropic's recent endorsement of AI safety legislation. The conversation also delves into current job market trends, the integration of AI into security teams, and the alarming advancements in exploit development through tools like CVE Genie.

Articles
Axios: Gen Z still choosing college despite AI anxieties
PBS NewsHour: Why many in Gen Z are ditching college for training in skilled trades
Axios: Jobs data shows hiring momentum slowdown
Moneywise: US has more unemployed than job openings for first time since 2021
TechCrunch: Anthropic endorses California’s AI safety bill SB-53
Anthropic: Anthropic is endorsing SB-53
OpenAI blog: Why language models hallucinate
OpenAI paper PDF: Why Language Models Hallucinate

Follow for More
Conor Sherman — LinkedIn | Website | Sysdig
Stuart Mitchell — LinkedIn | Website
Subscribe: Apple Podcasts | Spotify | YouTube | Website

Sep 10, 2025 • 45min
AGI and Employment: A Double-Edged Sword ft. Daniel Miessler
Daniel Miessler, a cybersecurity expert and the creator of Unsupervised Learning, discusses the future of work in an AI-dominated world. He explores the unsettling possibility of a 'zero-employee' ideal and its implications for society and security. The conversation digs into the emotional turmoil CEOs face during layoffs and the urgent need for new economic structures like Universal Basic Income. Additionally, Miessler emphasizes the importance of curiosity and critical thinking for future workers to navigate the challenges posed by AI.

Sep 9, 2025 • 38min
Back to School, Back to Basics: AI, Coding, and Security Fundamentals
Conor Sherman and Stuart Mitchell dive into the intersection of AI, coding, security, and leadership. They discuss the “September Surge” in hiring, the evolving role of AI in software development, and the critical need for strong security fundamentals as organizations accelerate their adoption of AI technologies. The conversation covers the risks and rewards of AI-driven coding, the responsibilities of security teams, and the importance of leadership and organizational change in navigating this new landscape.

Key Topics Covered
The “back to school” energy in the hiring market and what it means for tech teams
How AI is shifting from an option to a directive in technology strategy
Balancing speed and security: the risks of increased code output from AI assistants
The fundamentals of security and why they matter more than ever
The human element in AI leadership and organizational change
Real-world risks: prompt injection, agentic browsers, and exposed LLM servers (see the sketch after this episode's resources)
Adapting security controls for AI with frameworks like NIST’s COSAIS

Featured Links & Resources
4x Velocity, 10x Vulnerabilities: AI Coding Assistants Are Shipping More Risks — Read the Apiiro blog
Sysdig 2025 Cloud-Native Security Report — Read the Sysdig report
Cisco: Detecting Exposed LLM Servers (Ollama/Shodan Study) — Read the Cisco blog
Brave Research: Indirect Prompt Injection in Perplexity Comet — Read the Brave blog
NIST CSRC: Control Overlays for Securing AI Systems (COSAIS) – Concept Paper — Read the NIST concept paper
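The exposed-LLM-server risk above (the subject of the Cisco Ollama/Shodan study) lends itself to a quick illustration. Below is a minimal Python sketch, not taken from the episode or the study, that probes hosts you are authorized to test for an unauthenticated Ollama API on its default port; the host list and timeout are placeholder assumptions.

```python
# Minimal sketch: check whether an Ollama API is reachable without authentication.
# Assumes you are authorized to probe these hosts; the host list and timeout are
# illustrative placeholders (TEST-NET addresses), not values from the episode.
import requests

HOSTS = ["192.0.2.10", "192.0.2.11"]  # hypothetical hosts you own
OLLAMA_PORT = 11434  # Ollama's default API port

def check_exposed(host: str, timeout: float = 3.0) -> None:
    url = f"http://{host}:{OLLAMA_PORT}/api/tags"  # endpoint that lists local models
    try:
        resp = requests.get(url, timeout=timeout)
        if resp.ok:
            models = [m.get("name") for m in resp.json().get("models", [])]
            print(f"{host}: EXPOSED, models visible without auth: {models}")
        else:
            print(f"{host}: responded with HTTP {resp.status_code}")
    except requests.RequestException:
        print(f"{host}: no open Ollama API detected")

if __name__ == "__main__":
    for h in HOSTS:
        check_exposed(h)
```

Running a check like this against your own internet-facing ranges is a cheap way to confirm that development LLM servers are not reachable from outside.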

Sep 3, 2025 • 48min
Challenging Trust in AI Systems ft. Keith Hoodlet
Quick Take (TL;DR)
LLMs don’t think—they predict. Keith Hoodlet shows what this means for CISOs facing bias, slopsquatting, MCP risks, and burnout.

Guest Spotlight
Keith Hoodlet is Engineering Director at Trail of Bits. He previously held leadership roles at GitHub and Rapid7, co-founded Application Security Weekly, and launched the InfoSec Mentors Project.
LinkedIn | Website | Newsletter

Resources & References
Books
AI Snake Oil
Four Thousand Weeks
Articles / Studies
Marine Corps Times
2025 Cloud-Native Security and Usage Report
The Register: Slopsquatting (see the dependency-check sketch after this episode's notes)
Tools / Frameworks
Model Context Protocol
NVIDIA NeMo Guardrails
Meta Llama Guard

Call to Action
If this episode reshaped how you think about AI security, share it. Connect with your hosts:
Conor Sherman — LinkedIn | Website | Sysdig
Stuart Mitchell — LinkedIn | Website
Subscribe to Zero Signal: Apple | Spotify | YouTube | Website
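Slopsquatting comes up only in passing here, so the following is just a hedged illustration: a small Python sketch that checks AI-suggested dependency names against PyPI's public JSON API and flags packages that are missing or very recently published. The package names and the 90-day threshold are arbitrary assumptions, not guidance from Keith or Trail of Bits.

```python
# Minimal sketch: sanity-check AI-suggested dependency names against PyPI before
# installing them, a lightweight guard related to slopsquatting. Package names
# and the 90-day "very new" threshold are arbitrary, illustrative assumptions.
from datetime import datetime, timezone

import requests

SUGGESTED = ["requests", "some-ai-suggested-package"]  # hypothetical examples

def first_release(name: str):
    # PyPI's JSON API returns 404 for names that were never published.
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=5)
    if resp.status_code != 200:
        return None
    uploads = [
        f["upload_time_iso_8601"]
        for files in resp.json().get("releases", {}).values()
        for f in files
    ]
    return min(uploads) if uploads else None

if __name__ == "__main__":
    for pkg in SUGGESTED:
        first = first_release(pkg)
        if first is None:
            print(f"{pkg}: not found on PyPI; do not install blindly")
            continue
        published = datetime.fromisoformat(first.replace("Z", "+00:00"))
        age_days = (datetime.now(timezone.utc) - published).days
        note = "review carefully, very new" if age_days < 90 else "established package"
        print(f"{pkg}: first published {first} ({note})")
```

Existence and age alone are not a defense, since slopsquatters register hallucinated names quickly, but flagging brand-new or missing packages is a cheap first filter before a human review.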

Sep 1, 2025 • 34min
AI Ethics and Global Standards ft. Olivia Phillips
Quick Take (TL;DR)
AI is rapidly transforming cybersecurity, demanding new frameworks for trust, leadership, and risk. Olivia Phillips shares why integrating security and ethics from the ground up is essential as organizations re-platform for an AI-driven future.

Guest Spotlight
Olivia Phillips is Vice President and US Chapter Chair of the Global Council of Responsible AI and founder of Wolf by Technology. With over 20 years in cybersecurity, she began in malware analysis and forensics and is now a leading voice on AI ethics, risk, and leadership.
Connect with Olivia on LinkedIn.

Call to Action
If you found this episode useful, please share it and subscribe!
Conor Sherman — LinkedIn | Website | Sysdig
Stuart Mitchell — LinkedIn | Website
Subscribe: Apple Podcasts | Spotify | YouTube | Website

Aug 19, 2025 • 36min
The Role of CISOs in AI Innovation ft. Ashish Rajan
In this conversation, Ashish Rajan, founder of TechRiot.io, discusses the evolving landscape of AI security, emphasizing the challenges security leaders face as AI technologies rapidly advance. He highlights the need for CISOs to balance innovation with security, the importance of trust in AI systems, and the frameworks that can guide organizations in navigating these changes. The discussion also covers the layered security approach necessary for AI applications and the role of human oversight in AI decision-making.

Takeaways
AI is transforming the security landscape and creating new risks.
CISOs must adapt to rapid changes in technology and security.
Trust in AI is built on transparency and reliability.
Organizations need to establish frameworks for AI governance.
Human oversight is essential in AI decision-making processes (a minimal sketch of such a gate follows the takeaways).
Authorization remains a significant challenge in cybersecurity.
The pace of AI adoption is faster than previous technological shifts.
Security hygiene is crucial to prevent incidents.
AI's integration into business processes requires careful management.
Collaboration across departments is vital for effective AI governance.
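To make the human-oversight takeaway concrete, here is a minimal, hypothetical sketch of an approval gate that holds high-risk agent-proposed actions for a person to confirm. The action names, risk scores, and console prompt are illustrative assumptions, not a pattern described in the episode.

```python
# Minimal sketch of a human-in-the-loop gate for agent-proposed actions.
# Purely illustrative: the actions, risk threshold, and approve() prompt
# are assumptions, not anything prescribed by the episode.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (high impact), assigned upstream

RISK_THRESHOLD = 0.5  # arbitrary cutoff for requiring human review

def approve(action: ProposedAction) -> bool:
    # In practice this would route to a ticketing or chat workflow;
    # here we simply ask on the console.
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    print(f"Executing: {action.description}")

def handle(action: ProposedAction) -> None:
    # Low-risk actions flow through; high-risk ones wait for a human decision.
    if action.risk_score >= RISK_THRESHOLD and not approve(action):
        print(f"Blocked pending review: {action.description}")
        return
    execute(action)

if __name__ == "__main__":
    handle(ProposedAction("rotate an IAM access key", risk_score=0.8))
    handle(ProposedAction("summarize yesterday's alerts", risk_score=0.1))
```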