

She Said Privacy/He Said Security
Jodi and Justin Daniels
This is the She Said Privacy / He Said Security podcast with Jodi and Justin Daniels. Like any good marriage, Jodi and Justin will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.
Episodes

Jan 22, 2026 • 30min
How Safe Are Kids' GPS Trackers and Smartwatches?
Steve Blair is the Senior Privacy and Security Test Program Leader at Consumer Reports, where he evaluates connected devices and digital products to uncover privacy and security risks. With a background spanning early internet technology, mobile hardware, and product security, he helps consumers better understand how their data is collected, used, and protected, especially in emerging technologies designed for families and children.

In this episode…

Connected devices designed for kids play a growing role in how families stay connected and informed. GPS trackers, smartwatches, and other apps and tools often promise safety and convenience, yet they also raise questions about how children's data is collected, used, stored, and protected. The challenge is not whether these tools function as intended, but how they handle personal information once they are in use. How can parents gain confidence in the technology their children use every day while avoiding privacy and security risks?

A practical starting point is to read privacy notices and product descriptions, then examine how devices and apps behave in practice. Reviewing default settings, questioning app permissions, and noting how easy privacy controls are to find can help parents manage risk and better understand how a company collects and handles kids' data. These considerations become especially important when children are required to use certain apps or connected devices to participate in school activities or other events.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Steve Blair, Senior Privacy and Security Test Program Leader at Consumer Reports, about privacy and security risks in kids' GPS trackers, wearables, and apps. Steve explains what Consumer Reports found when testing GPS trackers and wearables designed for children, and how hands-on testing helps parents better understand device privacy controls.
He shares practical ways parents can assess app privacy and security protections, even without deep technical expertise, and offers everyday tips parents can put to use right away, like keeping devices updated, removing apps when they are no longer needed, and requesting data deletion when app use ends.

Jan 8, 2026 • 27min
From Manual to Automated: Building Privacy Programs That Scale
Ron De Jesus is the Field Chief Privacy Officer at Transcend, driving practical privacy governance and industry advocacy. He previously led privacy at Grindr, Tinder, and Match Group, built global programs at Tapestry and American Express, founded De Jesus Consulting, and remains an active community leader through the IAPP and LGBTQ Privacy & Tech Network.

In this episode…

Privacy professionals navigate a growing web of privacy regulations and emerging technologies, yet many still rely on manual processes to manage their programs. Teams might track global requirements in spreadsheets and manually triage privacy rights requests. To scale privacy programs effectively, teams need to move beyond manual approaches. So what should privacy teams consider as they adopt automated solutions?

The key to scaling privacy programs efficiently lies in embracing automation and technology that aligns with an organization's broader goals. When privacy leaders secure early buy-in from stakeholders, technology decisions are more likely to support the business beyond basic compliance needs. Teams also need clarity on what they are trying to accomplish, a thorough understanding of where their data lives, and time to evaluate how new tech fits into their existing systems and workflows. Sometimes teams expect third-party privacy tools to work out of the box and solve their compliance needs. However, that is often not the case, which is why companies must review and test vendor tech solutions to ensure they accurately meet company requirements.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Ron De Jesus, Field Chief Privacy Officer at Transcend, about transitioning privacy programs from manual processes to automation.
Ron emphasizes the importance of internal alignment when adopting privacy technology, discusses the risks of treating privacy tools as plug-and-play compliance solutions, and highlights the need for companies to review vendor tech solutions against their specific requirements and legal obligations. He also explains how the privacy community helps shape his view of how teams operationalize privacy in practice and shares his prediction for what's in store for privacy professionals in 2026.

Dec 18, 2025 • 27min
Why Knowing Company Data is Every General Counsel's First Privacy Move
Talar Herculian Coursey, General Counsel and VP of HR at ComplyAuto, shares her journey from file clerk to legal expert in the auto industry. She emphasizes the importance of understanding data types collected by dealerships, highlighting risks from third-party vendors. Talar discusses strategies for secure communication, like encrypted messaging, and the necessity of customized, gamified training for staff to handle sensitive information. She offers practical advice for car buyers to ensure their data protection while also balancing her interests in yoga and chess.

Dec 11, 2025 • 22min
So You Got the Privacy Officer Title, Now What?
Teresa "T" Troester-Falk has over 20 years of experience building privacy programs that work when resources are limited and timelines are real. She led initiatives at DoubleClick (Google), Epsilon, Nielsen, and Nymity (TrustArc) before founding BlueSky Privacy and BlueSky PrivacyStack. Today she creates practical tools and systems that help privacy professionals step into their role with confidence and give executives decisions they can act on. Through her writing and teaching, she brings clarity to complex requirements and shows how privacy can succeed in practice.

In this episode…

Privacy professionals step into their roles with foundational knowledge, yet often without the support needed to apply it in practice. They are sometimes expected to build and maintain privacy programs without a budget, authority, or a clear plan. This gap creates daily uncertainty, especially for newly certified privacy professionals who enter the field with little operational experience. So how can privacy professionals move through these challenges and build programs they can defend with confidence?

Building a functioning privacy program requires making decisions in gray areas and moving forward without waiting for perfect information. Privacy pros can start by focusing on high-risk areas first and documenting their decision-making process using a three-pillar approach. This framework helps professionals explain the decision they made, maintain what was decided, and defend it with evidence. Clear ownership and accountability ensure processes hold over time. With the right operational structure in place, privacy pros can move privacy programs forward even when resources are tight.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Teresa Troester-Falk, Founder of BlueSky Privacy and BlueSky PrivacyStack, about building effective privacy programs with limited resources.
Teresa explains how a simple decision-making framework can help new and seasoned privacy professionals work through ambiguity. She also shares strategies for prioritizing privacy work when budgets are tight and expectations are high, and explains why establishing ownership and operational processes is essential for sustaining long-term privacy success.

Dec 4, 2025 • 29min
Where Policymaking Meets Privacy and AI Innovation
Monique Priestley is a Vermont State Representative focused on data privacy, AI, right to repair, and the future of work. Monique serves on the House Commerce & Economic Development Committee, Joint IT Oversight Committee, and multiple national tech policy task forces. She was named a 2024 EPIC Champion of Freedom.

In this episode…

State privacy laws are evolving faster than ever, yet the dynamics shaping them often remain out of view for most organizations. Technology shifts quickly, and the issues raised in proposed privacy and AI bills require far more research and preparation than the calendar allows. That's why lawmakers work year-round to understand these complex technologies and collaborate with their peers in other states to refine definitions and bill provisions, ensuring that appropriate privacy protections are in place.

Many states entered 2025 with strong privacy bills on the table, yet progress slowed as industry counterproposals and competing drafts drew support away from stronger models, making it harder for legislators to keep consumer privacy protections intact. Vermont State Representative Monique Priestley has seen this firsthand and brings a unique lens to this dynamic, drawing on her discussions with the public and her collaborative work with lawmakers across the country. As public concerns about privacy and AI grow and privacy laws evolve, companies will need to be proactive about the steps they take to protect people's data and be clear about how those protections work.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Monique Priestley, Vermont State Representative, about the realities that shape state-level privacy and AI legislation. Monique discusses the behind-the-scenes work required to educate lawmakers and build strong, technology-informed privacy and AI bills, and what might change in the year ahead.
She also shares insights into the public's rising concerns about how their data is used, highlighting the steps companies can take to build trust.

Nov 20, 2025 • 27min
Hands-On AI Skills Every Legal Team Needs
Mariette Clardy-Davis is Assistant General Counsel at Primerica, providing strategic guidance on the Securities Business. Recognizing AI competence as a professional duty, she launched "Unboxing Generative AI for In-House Lawyers" virtual workshops and an online directory empowering lawyers to move from AI overwhelm to practical application through hands-on learning.

In this episode…

Legal teams are turning to generative AI to speed up their work, yet many struggle with getting consistent, usable results. Learning AI skills requires hands-on practice with prompting frameworks, styling guides, and instructions that improve output quality. That's why attorneys need creative training approaches that help these skills stick and carry over into their day-to-day work.

Building AI fluency isn't about mastering the technology itself. It's about shifting mindset and approach. One common challenge legal teams encounter is expecting AI to deliver consistent outputs every time, yet AI doesn't work like a copy machine. It responds through patterns, so the same prompt might produce different results. That's why creative, narrative-based training is effective for learning prompting frameworks. When attorneys pair detailed prompt instructions with gold standard examples, AI tools get the reference points they need for tone, style, and structure. Saving strong prompts into a library creates leverage and reduces the time spent rebuilding instructions for recurring tasks. This helps attorneys reduce rework, improve accuracy, and shift from basic efficiency tasks to work that supports strategy and collaboration.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Mariette Clardy-Davis, Assistant General Counsel at Primerica, about how in-house legal teams can embrace generative AI education.
Mariette explains how creative, story-driven workshops make AI learning more engaging and why understanding prompting frameworks is essential for consistent results. She discusses common misconceptions lawyers have about generative AI tools and how building a task-based directory with reusable prompts helps legal teams save time on repetitive work. Mariette also explains how attorneys can use AI not just to speed up tasks but to support more substantive legal work.

Nov 13, 2025 • 25min
Adapting Cybersecurity Measures for the Age of AI
Khurram Chhipa currently serves as General Counsel at Halborn, a leading cybersecurity company in the Web3 space. With expertise spanning blockchain security, compliance, and digital risk management, he brings a unique perspective to the intersection of law and technology. Outside of work, Khurram enjoys spending time with family and friends.

In this episode…

Artificial intelligence is changing how cybersecurity teams detect and respond to threats. What once required manual monitoring has evolved into an adaptive solution that uses predictive modeling to identify risks sooner. While AI can strengthen security defenses, it also raises questions about accuracy and the need for human oversight.

For legal and security teams working in fast-moving sectors like blockchain, AI offers efficiency yet also introduces new risks. Large language models (LLMs) can help general counsels generate contracts and prepare for negotiations, yet they require human oversight to spot and correct errors. That's why companies need to develop clear playbooks, train teams, and implement a continuous review process to ensure responsible AI use. For security teams, the same principle applies. While predictive AI tools can identify threats earlier, security teams should also test their incident response readiness through tabletop exercises and encourage employees to adopt a "don't trust, verify" mindset to guard against threats like deepfakes.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Khurram Chhipa, General Counsel at Halborn, about how AI is transforming cybersecurity. Khurram explains how AI is reshaping threat detection, why human oversight is essential when using AI in legal and security contexts, and provides practical strategies for implementing safeguards.
He also describes the growing AI arms race and its impact on cybersecurity, and he provides tips on how companies can mitigate AI deepfake threats through custom training and implementing advanced security measures.

Nov 6, 2025 • 35min
The Path to Restoring Trust in a Connected World
Mark Weinstein is a successful tech entrepreneur, board member, and consultant, and one of the visionary inventors of social networking. He is the author of Restoring Our Sanity Online (Wiley, 2025), a book endorsed by Sir Tim Berners-Lee and Steve Wozniak. Mark is the Founder of MeWe, the first social network with a Privacy Bill of Rights, which grew to over 20 million members. He also founded SuperFamily.com and SuperFriends.com, early social networks recognized by PC Magazine as "Top 100" sites. He is an inventor of 15 groundbreaking digital advertising patents. Mark has delivered the landmark TED Talk, "The Rise of Surveillance Capitalism." He is frequently interviewed and published in major media outlets around the world. Beyond his entrepreneurial achievements, Mark has chaired the New Mexico Accountancy Board and served as an Adjunct Marketing Professor at the University of New Mexico. He holds an MBA from UCLA's Anderson School of Management.

In this episode…

The internet began as a way to connect family, friends, and communities. Over time, platforms shifted towards surveillance capitalism, where users' personal information can be monetized and people can be targeted and even manipulated. Social media and AI now shape what people see, think, and buy, while algorithms quietly learn how to influence our choices. As technology advances, how can companies and individuals alike protect privacy and rebuild trust in the systems that connect us?

As one of the pioneers of social networking, Mark Weinstein has seen this transformation firsthand. Early models were built around community and connection, while later models monetized personal information for targeting and profit. The next phase focuses on stronger privacy controls, data portability, and user choice.
Building safer digital experiences means companies need to avoid unnecessary data collection and manipulative design tactics, and to communicate transparently about how personal information is used and shared. Individuals can also play a role by supporting user ID verification to make social media safer and by teaching children critical thinking skills to help them combat misinformation and manipulation online.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Mark Weinstein, tech entrepreneur, author, board member, and consultant, about rethinking privacy and control in the digital age. Mark reflects on the lessons learned from early social network models and discusses the evolution of the internet from connection-driven communities to surveillance capitalism, explaining how current models exploit user data. He explores his vision for Web4 and its new approach centered on data ownership and portability. He also offers practical advice for protecting children from online harms and stresses the importance of fostering critical thinking in the age of AI.

Oct 30, 2025 • 31min
AI, Privacy, and the General Counsel's Role in Responsible Innovation
Lane Blumenfeld is the Chief Legal Officer for Data Driven Holdings (DDH). Through its portfolio companies, headed by TEAM VELOCITY, DDH has become a market leader of data-powered technology and marketing solutions for the automotive industry. Lane was named a Top 50 Corporate Counsel by OnCon. Lane holds a JD from Yale Law School, an MA in international affairs from the Johns Hopkins University School of Advanced International Studies (SAIS), and a BA magna cum laude from Cornell University.

In this episode…

The pressure on companies to deliver faster, more personalized digital experiences often conflicts with their privacy and security obligations. General counsels sit at the center of this tension, balancing the business value of personal data with the need to protect it. That's why their involvement early in product development is essential. Working with product and engineering teams from the start allows legal teams to build safeguards into design, before products and services reach customers. So, how can companies find the right balance without compromising privacy and security?

AI also adds a new layer of complexity. As companies use it to analyze data, refine customer targeting, and generate marketing content, legal teams and general counsels are adapting to evolving regulations. While clean, reliable data is essential, general counsels need to evaluate accuracy and bias to ensure responsible use.

Even as AI advances, fundamental privacy and security principles still apply. That's why it's important for organizations to take ownership of their privacy practices, especially when it comes to privacy notices and vendor relationships. Companies shouldn't depend on generic privacy notices or third-party templates that fail to reflect their actual data handling practices. Vendor contracts need equal attention, with privacy and cybersecurity provisions that mirror company commitments to consumers, since one vendor's mistake can create significant risk.
In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Lane Blumenfeld, Chief Legal Officer at Data Driven Holdings, about how general counsels can balance innovation with privacy and security. Lane explains how early legal involvement helps embed privacy and security into product design. He emphasizes that clear, accurate privacy notices and well-structured vendor contracts are essential for reducing privacy and security risks and maintaining accountability. And, as AI reshapes compliance obligations, Lane highlights the need for defined ownership across legal, product, and vendor teams, and explains why companies sometimes need to walk away from vendors that expose them to excessive risk.

Oct 23, 2025 • 31min
Accelerating AI Adoption Through AI Week
Summer Crenshaw is the Co-Founder and CEO of the Enterprise Technology Association (ETA), the national leader in AI and emerging technology adoption. She serves on multiple advisory boards and champions innovation, education, and responsible technology adoption. A seasoned tech entrepreneur and strategist, she previously co-founded Tilr, an AI-powered job marketplace recognized by CNBC, Forbes, and VentureBeat. Summer has been featured in major outlets and spoken on national stages, including DisruptHR and Dreamforce.

In this episode…

Business leaders across industries are responding to AI with a mix of excitement, fear, and uncertainty. Many want to use AI tools to accelerate business goals, yet they also worry about the risks and how these tools could disrupt jobs and existing roles. To move forward, companies need to focus on continuous learning that helps people understand and apply AI responsibly. So how can companies close the skill gaps that limit progress while ensuring their teams continue learning as AI evolves?

Accelerating responsible AI adoption starts with education that connects people, communities, and industries. Organizations like the Enterprise Technology Association are helping bridge that gap through AI Week, a fast-moving initiative that brings together local leaders, educators, and companies to share insights for responsible AI adoption. These community-driven gatherings are designed around the industries and priorities of each city, creating programming that makes AI accessible to both technical and non-technical audiences. For companies to succeed, they also need to rethink how they approach governance. Rather than viewing it as a brake that hinders progress, it should serve as a steering wheel that guides teams through implementation and helps them achieve their goals.
In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Summer Crenshaw, Co-founder and CEO of the Enterprise Technology Association (ETA), about how businesses can accelerate responsible AI adoption through education and collaboration. Summer shares how AI Week launched in just five weeks and scaled across multiple cities by empowering local leaders and creating accessible AI programming. She explains why governance should enable rather than hinder AI implementation and what separates the 5% of successful AI projects from those that fail. Summer also discusses how to prepare for AI in 2026, addressing the shift from theory to measuring human impact.


