
She Said Privacy/He Said Security

Latest episodes

Jun 18, 2025 • 29min

Agentic AI for Software Security: Eliminate More Vulnerabilities, Triage Less

Ian Riopel is the CEO and Co-founder of Root, applying agentic AI to fix vulnerabilities instantly. A US Army veteran and former Counterintelligence Agent, he’s held roles at Cisco, CloudLock, and Rapid7. Ian brings military-grade security expertise to software supply chains.

John Amaral is the CTO and Co-founder of Root. Previously, he scaled Cisco Cloud Security to $500M in revenue and led CloudLock to a $300M acquisition. With five exits behind him, John specializes in building cybersecurity startups with strong technical vision.

In this episode…

Patching software vulnerabilities remains one of the biggest security challenges for many organizations. Security teams are often stretched thin as they try to keep up with vulnerabilities that can quickly be exploited. Open-source components and containerized deployments add even more complexity, especially when updates risk breaking production systems. As compliance requirements tighten and the volume of vulnerabilities grows, how can businesses eliminate software security risks without sacrificing productivity?

Companies like Root are transforming software vulnerability remediation by applying agentic AI to streamline the process. Rather than relying on engineers to triage and prioritize thousands of issues, Root’s AI-driven platform scans container images, applies safe patches where available, and generates custom patches for outdated components that lack official fixes. Root’s AI automation resolves approximately 95% or more of vulnerabilities without breaking production systems, allowing organizations to meet compliance requirements while developers stay focused on building and delivering software.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Ian Riopel and John Amaral, Co-founders of Root, about how AI streamlines software vulnerability remediation. Together, they explain how Root’s agentic AI platform uses specialized agents to automate patching while maintaining software stability. John and Ian also discuss how regulations and compliance pressures are driving the need for faster remediation, and how Root differs from threat detection solutions. They also explain how AI can reduce security workloads without replacing human expertise.
Jun 12, 2025 • 28min

Operationalizing Privacy Across Teams, Tools, and Tech

Sarah Stalnecker is the Global Privacy Director at New Balance Athletics, Inc., where she leads the integration of privacy principles across the organization, driving awareness and compliance through education, streamlined processes, and technology solutions.

In this episode…

Operationalizing privacy programs starts with translating legal requirements into actions that work across teams. This means aligning privacy with existing tools and workflows while meeting evolving privacy regulations and adapting to new technologies. Today’s consumers also demand both personalization and privacy, and building trust means fulfilling these expectations without crossing the line. So, how can companies build a privacy program that meets regulatory requirements, integrates into daily operations, and earns consumer trust?

Embedding privacy into business operations involves more than just meeting regulatory requirements. It requires cultural change, leadership buy-in, and teamwork. Rather than forcing company teams to adapt to new privacy processes, organizations need to embed privacy requirements into existing workflows and systems that departments already use. Leading with consumer expectations instead of legal mandates helps shift mindsets and encourages collaborative dialogue about responsible data use. Documenting AI use cases and establishing an AI governance program also helps assess risks without reactive scrambling. Teams should also leverage privacy technology to scale processes and streamline compliance to ensure privacy becomes an embedded, organization-wide function rather than a siloed concern.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Sarah Stalnecker, Global Privacy Director at New Balance Athletics, about operationalizing privacy programs. Sarah shares how her team approaches data collection, embeds privacy into existing workflows, and uses consumer expectations to drive internal engagement. She also highlights the importance of documenting AI use cases and establishing AI governance to assess risk. Sarah provides tips on selecting and evaluating privacy technology and how to measure privacy program success beyond traditional metrics.
Jun 5, 2025 • 22min

Outsmarting Threats: How AI is Changing the Cyber Game

Brett Ewing is the Founder and CEO of AXE.AI, a cutting-edge cybersecurity SaaS startup, and the Chief Information Security Officer at 3DCloud. He has built a career in offensive cybersecurity, focusing on driving exponential improvement. Brett progressed from a Junior Penetration Tester to Chief Operating Officer at Strong Crypto, a provider of cybersecurity solutions. He brings over 15 years of experience in information technology, with the past six years focused on penetration testing, incident response, advanced persistent threat simulation, and business development. He holds degrees in secure systems administration and cybersecurity, and is currently completing a master’s degree in cybersecurity with a focus on AI/ML security at the SANS Technology Institute. Brett also holds more than a dozen certifications in IT, coding, and security from the SANS Institute, CompTIA, AWS, and other industry vendors.

In this episode…

Penetration testing plays a vital role in cybersecurity, but the traditional manual process is often slow and resource-heavy. Testing cycles can take weeks, creating gaps that leave organizations vulnerable to fast-moving threats. With growing interest in more efficient approaches, organizations are exploring new AI tools to automate tasks like tool configuration, project management, and data analysis. How can cybersecurity teams use AI to test environments faster without increasing risk?

AXE.AI offers an AI-powered platform that supports ethical hackers and red teamers by automating key components of the penetration testing process. The platform reduces overhead by configuring tools, analyzing output, and building task lists during live engagements. This allows teams to complete high-quality tests in days instead of weeks. AXE.AI’s approach supports complex environments, improves data visibility for testers, and scales efficiently across enterprise networks. The company emphasizes a human-centered approach and advocates for workforce education and training as a foundation for secure AI adoption.

In today’s episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Brett Ewing, Founder and CEO of AXE.AI, about leveraging AI for offensive cybersecurity. Brett explains how AXE.AI’s platform enhances penetration testing and improves speed and coverage for large-scale networks. He also shares how AI is changing both attack and defense strategies, highlighting the risks posed by large language models (LLMs) and deepfakes, and explains why investing in continuous workforce training remains the most important cyber defense for companies today.
May 29, 2025 • 32min

Privacy Reform Is Coming to Australia: What Businesses Need To Know

James Patto is a Partner at Helios Salinger and a leading voice in Australia’s tech law landscape, trusted by business and government on privacy, cybersecurity, and AI issues. With over a decade of experience as a digital lawyer, he helps organizations turn regulation into opportunity by bridging law, innovation, and strategy to build trust and thrive in a digital world.

In this episode…

Australian privacy law stands at a critical juncture as organizations potentially face the country’s most significant regulatory transformation yet. While the current principles-based Australian Privacy Act has been the foundation for over a decade, it contains notable gaps, like limited individual rights and broad exemptions for small businesses, employee data, and political parties. Meanwhile, 84% of Australians want more control over how their personal information is collected and used, and with recent enforcement changes introducing civil penalties and on-the-spot fines, regulators now have stronger tools to hold organizations accountable. As lawmakers consider the next phase of reforms, how can businesses prepare for new compliance requirements while navigating an uncertain implementation timeline?

Businesses can adapt to evolving privacy regulations and position themselves for success by strengthening their current privacy practices, including privacy notice quality, direct marketing opt-out procedures, and data breach notification accuracy. Conducting a privacy maturity assessment and implementing streamlined, risk-based privacy impact assessments can help identify gaps and prepare for new compliance obligations. It’s also critical for organizations to build a comprehensive data inventory so they understand the data they collect, where it resides, and how it’s used, shared, or sold.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with James Patto, Partner at Helios Salinger, about the current state and future of Australia’s privacy law. James discusses the major shifts in Australia’s privacy landscape and the broader implications for businesses. He shares how Australia’s strong small business sector influences privacy policymaking and how the Privacy Review Report's 89 proposals might reshape Australia's regulatory framework. James also explores the differences between Australia’s privacy law and the GDPR, the timeline for proposed reforms, and what companies should do now to prepare.
May 22, 2025 • 30min

Terms, Tech & Trust: A Privacy Deep Dive With Harvey AI

Anita Gorney is the Head of Privacy and AI Legal at Harvey, an AI tool for legal professionals and professional service providers. Before Harvey, she was Privacy Counsel at Stripe. Anita studied law in Sydney and began her career there before moving to London and then New York.

In this episode…

Legal professionals often spend time on manual tasks that are repetitive and difficult to scale. Emerging AI platforms, like Harvey AI, are addressing this challenge by offering tools that help lawyers handle tasks such as legal research and contract review more efficiently. As legal professionals adopt AI to streamline their work, they are placing greater focus on data confidentiality and the secure handling of client information. Harvey AI addresses these concerns through its strict privacy and security controls, customer-controlled retention and deletion settings, and a commitment to not train on customer data.

Harvey AI provides a purpose-built platform tailored for legal professionals. The company’s suite of tools (Assistant, Vault, and Workflow) automates repetitive legal work like summarizing documents, performing contract reviews, and managing due diligence processes. Harvey AI emphasizes privacy and security through features like zero data retention, encrypted processing, and workspace isolation, ensuring customer data remains confidential and is never used for model training. With a transparent, customer-first approach, Harvey AI empowers legal teams to work more efficiently without compromising trust or user data.

In this episode of the She Said Privacy/He Said Security podcast, Jodi and Justin Daniels speak with Anita Gorney, Head of Privacy and AI Legal at Harvey AI, about how legal professionals use specialized artificial intelligence to streamline their work. Anita explains how Harvey AI's platform helps with tasks like contract analysis and due diligence, while addressing privacy and security concerns through measures like customizable data retention periods and workspace isolation. She also discusses the importance of privacy by design in AI tools, conducting privacy impact assessments, and implementing user-controlled privacy settings.
May 15, 2025 • 31min

Silent Threats Lurking in Your Child’s Devices and How To Avoid Them

Ben Halpert is a cybersecurity leader, educator, and advocate dedicated to empowering digital citizens. As a Fractional CISO, author, and the founder of Savvy Cyber Kids, he advances cyber safety and ethics. A sought-after speaker, Ben shares insights globally, shaping secure digital futures at work, school, and home.

In this episode…

Many parents mistakenly believe that technology companies have built-in safety controls that keep children safe online. In reality, these protections are often inadequate and misleading. From AI chatbots posing as friends to online predators targeting children through gaming platforms and social media, young users, whose brains are still developing, struggle to distinguish between real human interactions and programmed responses. How can parents and caregivers proactively safeguard their children’s digital experiences while fostering healthy tech habits?

Addressing these risks starts with parental oversight and consistent, age-appropriate education and guidance. Devices should be removed from kids’ bedrooms at night to prevent unsupervised use and reduce exposure to online threats. Parents should actively monitor every app, game, and online interaction, ensuring children only engage with people they know in real life. Families should also establish device-free times, like during meals, to encourage face-to-face communication and teach healthy social habits. Savvy Cyber Kids supports these efforts by providing age-appropriate educational resources, including children’s picture books, classroom activities, and digital parenting guides that help families navigate online safety. By focusing on direct education for young children and providing tools for parents and schools, the organization instills foundational privacy and cybersecurity awareness from an early age.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels welcome Ben Halpert, Founder of Savvy Cyber Kids, back to the podcast to discuss the growing digital threats facing kids today. Ben explains how AI chatbots are being treated as real friends, how social media messaging misleads parents, and why depending on tech companies for protection is risky. He also shares how predators use games and platforms to target kids, how parental involvement and early education can help build safer digital habits, and what steps parents can take to monitor and guide their children’s tech use at home. Finally, he explains how Savvy Cyber Kids helps by educating young children in schools and providing families with tools to teach online safety.
May 8, 2025 • 22min

Improving Cyber Readiness: Lessons from Real-World Investigations

Todd Renner is a seasoned cybersecurity professional with over 25 years of experience leading global cyber investigations, incident response efforts, and digital asset recovery operations. He advises clients on a wide range of cybersecurity and data privacy matters, combining deep technical knowledge with a strategic understanding of risk, compliance, and regulatory frameworks. With a distinguished background at the Federal Bureau of Investigation (FBI) and National Security Agency (NSA), Todd has contributed to national security and international cyber collaboration, and has played a key role in mentoring the next generation of cybersecurity professionals.

In this episode…

The rising complexity of cyber threats continues to test how businesses prepare, respond, and recover. Sophisticated threat actors are exploiting the vulnerabilities of private companies and leveraging AI tools to accelerate their attacks. Despite these dangers, many organizations hesitate to involve law enforcement when a cyber event occurs. This hesitation often stems from misconceptions about what law enforcement involvement entails, including fears of losing control over their systems or exposing sensitive company information. As a result, companies may prioritize quickly restoring operations over pursuing the attackers, leaving critical security gaps unaddressed.

Collaborating with law enforcement doesn’t mean forfeiting control or exposing confidential data unnecessarily. Investigations often reveal repeated issues, including mobile device compromises, missing multifactor authentication, and failure to improve cybersecurity measures after a breach. To be better prepared, companies need to develop and practice incident response plans, ensure leadership remains involved, and build security programs that evolve beyond incident response. And, as threat actors actively use AI to accelerate data aggregation and create convincing deepfakes, companies need to start thinking about how to better detect these threats.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Todd Renner, Senior Managing Director at FTI Consulting, about how organizations are responding to modern cyber threats and where many still fall short. Todd shares why companies hesitate to engage law enforcement, how threat actors are using AI for faster targeting and impersonation, and why many businesses fail to strengthen their cybersecurity programs after a breach. He also discusses why deepfakes are eroding trust and raising new challenges for companies, and he provides practical tips for keeping both organizations and families safe from evolving threats.
May 1, 2025 • 19min

Top Takeaways From IAPP GPS 2025 and Atlanta AI Week

Jodi Daniels is the Founder and CEO of Red Clover Advisors, a privacy consultancy that integrates data privacy strategy and compliance into a flexible, scalable approach that simplifies complex privacy challenges. A Certified Information Privacy Professional, Jodi brings over 27 years of experience in privacy, marketing, strategy, and finance across diverse sectors, supporting organizations from startups to Fortune 500 companies. Jodi is a national keynote speaker and has been featured by CNBC, The Economist, the WSJ, Forbes, Inc., and many other publications. She holds an MBA and a BBA from Emory University’s Goizueta Business School.

Justin Daniels is a corporate attorney who advises domestic and international companies on business growth, M&A, and technology transactions, with over $2 billion in closed deals. He helps clients navigate complex issues involving data privacy, cybersecurity, and emerging technologies like AI, autonomous vehicles, blockchain, and fintech. Justin partners with C-suites and boards to manage cybersecurity as a strategic enterprise risk and leads breach response efforts across industries such as healthcare, logistics, and manufacturing. A frequent keynote speaker and media contributor, Justin has presented at top events including the RSA Conference, covering topics like cybersecurity in M&A, AI risk, and the intersection of privacy and innovation.

Together, Jodi and Justin host the top-ranked She Said Privacy/He Said Security podcast and are the authors of the WSJ best-selling book, Data Reimagined: Building Trust One Byte at a Time.

In this episode…

From a major privacy summit to a regional AI event, experts across sectors are emphasizing that regulatory scrutiny is intensifying while AI capabilities and risks are accelerating. State privacy regulators are coordinating enforcement efforts, actively monitoring how companies handle privacy rights requests and whether cookie consent platforms work as they should. At the same time, AI tools are advancing rapidly with limited regulatory oversight, raising serious ethical and societal concerns. What practical lessons can businesses take from IAPP’s 2025 Global Privacy Summit and Atlanta’s AI Week to strengthen compliance, reduce risk, and prepare for what’s ahead?

At the 2025 IAPP Global Privacy Summit, a major theme emerged: state privacy regulators are collaborating on enforcement more closely than ever before. When it comes to honoring privacy rights, this collaboration spans early inquiry stages through active enforcement, making it critical for businesses to establish, regularly test, and monitor their privacy rights processes. It also means that companies need to audit cookie consent platforms regularly, ensure compliance with universal opt-out signals like the Global Privacy Control, and align privacy notices with actual practices. Regulatory enforcement advisories and FAQs should be treated as essential reading to stay current on regulators' priorities.

Likewise, at the inaugural Atlanta AI Week, national security and ethical concerns came into sharper focus. Despite promises of localized data storage, some social media platforms and apps continue to raise alarms over foreign governments’ potential access to personal data. While experts encourage experimentation and practical application of AI tools, they are also urging businesses to remain vigilant to threats such as deepfakes, AI-driven misinformation, and the broader societal implications of unchecked AI development.

In this episode of She Said Privacy/He Said Security, Jodi Daniels, Founder and CEO of Red Clover Advisors, and Justin Daniels, Shareholder and Corporate Attorney at Baker Donelson, share their top takeaways from the IAPP Global Privacy Summit 2025 and the inaugural Atlanta AI Week. Jodi highlights practical steps for improving privacy rights request handling, the importance of regularly testing cookie consent management platforms, and ensuring published privacy notices reflect actual practices. Justin discusses the ethical challenges surrounding AI's rapid growth, the national security risks tied to social media platforms, and the dangers posed by deepfake technology. Together, Jodi and Justin emphasize the importance of continuous education, collaboration, and proactive action to prepare businesses for the future of privacy and AI.
Apr 17, 2025 • 34min

From Principle to Practice: What Privacy Pros Need to Succeed

Peter Kosmala is a course developer and instructor at York University in Canada and leads its Information Privacy Program. Peter is a former marketer, technologist, lobbyist, and association leader and a current consultant, educator, and international speaker. He served the IAPP as Vice President and led the launch of the CIPP certification in the early 2000s.

In this episode…

As data privacy continues to evolve, privacy professionals need to stay sharp by reinforcing their foundational knowledge and refining their practical skills. It’s no longer enough to just understand and comply with regulatory requirements. Today’s privacy work also demands cultural awareness, ethical judgment, and the ability to apply privacy principles in real-world settings. How can privacy professionals expand their expertise and remain effective in an ever-changing environment?

Privacy professionals can’t rely on legal knowledge alone to stay ahead. Frameworks like the Fair Information Practice Principles (FIPPs) and the OECD Guidelines offer principles that help privacy pros navigate shifting global privacy laws and emerging technologies. Privacy pros should also deepen their cultural literacy, recognizing the societal and political drivers behind laws like the GDPR to align privacy practices with public expectations. Hands-on operational experience is just as important. Conducting privacy impact assessments (PIAs), responding to data subject access requests (DSARs), and developing clear communications are just a few ways privacy pros can turn knowledge into practical application.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Peter Kosmala, Course Developer and Instructor at York University, about how privacy professionals can future-proof their skills. Peter discusses the value of foundational privacy frameworks, the tension between personalization and privacy, the limits of law-based compliance, and the growing need for ethical data use. He also explains the importance of privacy certifications, hands-on learning, and principled thinking to build programs that work in the real world.
Apr 10, 2025 • 23min

Making Privacy Tech Work: Why Process is the Game-Changer

Amanda Moore is a seasoned leader with extensive experience in privacy strategy, technology, and operations. She currently serves as the Senior Director of Privacy at DIRECTV, where she oversees the technology and operations side of the company’s privacy program. Prior to her role at DIRECTV, she held pivotal positions at CVS Health and AT&T, leading technical and business teams. Her career started in information technology but shifted to privacy before the onset of the CCPA. Amanda holds the CIPM certification and is a OneTrust Fellow of Privacy Technology.

In this episode…

Many organizations invest in privacy technology expecting it to deliver instant compliance, only to find that it fails to integrate with existing tools or processes. Adoption often lags when internal teams see privacy as a barrier or when tools are implemented without clearly defined goals. Choosing privacy technology before understanding the specific problem it’s meant to solve leads to confusion, inefficiency, and low adoption.

One of the most effective ways to boost technology adoption is to start with a clear understanding of business processes and goals before introducing new privacy tech. Successful privacy programs start by mapping business processes and making small backend adjustments that minimize disruption. Additionally, building internal awareness through roadshows, clear communication, and simplified privacy impact assessments helps shift perceptions and encourages teams to view privacy as a business enabler.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Amanda Moore, Senior Director of Privacy at DIRECTV, about integrating privacy technology into business operations. Amanda highlights how strong internal relationships help position privacy as a business enabler, why tailoring communication to various business executives enhances support for privacy initiatives, and how measuring privacy program maturity with the use of technology provides more insight than surface-level metrics. She also discusses methods to increase adoption through internal awareness campaigns and simplified assessments, and the long-term value of reputation-building within organizations.
