"The Data Diva" Talks Privacy Podcast

Debbie Reynolds
Sep 23, 2025 • 40min

The Data Diva E255 - Don Morron and Debbie Reynolds

Episode 255 – Don Morron, Founder and CEO of Highland Tech, AI Agents for Enterprise

What does it take to build resilience in a world of constant cyber threats? Don Morron shares strategies for adapting without losing control.

On The Data Diva Talks Privacy Podcast, Debbie Reynolds, “The Data Diva,” interviews Don Morron, Founder and CEO of Highland Tech, AI Agents for Enterprise, about how executives can build resilience into their organizations in the face of a constantly evolving cyber threat landscape. Morron shares lessons from his leadership journey in cybersecurity and explains why resilience cannot be bolted on after the fact but must be embedded into enterprise systems from the very beginning.

The conversation covers how AI is reshaping cybersecurity, both by enabling attackers with new tools and by empowering defenders with advanced capabilities. Morron provides practical insights into managing enterprise security operations in rapidly changing conditions without compromising organizational control. He stresses the importance of communication and collaboration across teams, highlighting how siloed approaches undermine resilience. The episode also explains why proactive governance and long-term planning are far more effective than reactive firefighting.

These insights are useful not only for executives and security leaders but also for anyone interested in how organizations adapt to technology-driven risks and build strength in uncertain times.

Hosted by Debbie Reynolds, “The Data Diva,” bringing global leaders together on privacy, cybersecurity, and emerging technology.
Sep 16, 2025 • 35min

The Data Diva E254 - Bryan Lee and Debbie Reynolds

Episode 254 – Bryan Lee, Founder and General Partner, Privatus Consulting

Why do privacy programs fail even when companies want to succeed? Bryan Lee explains why communication is the missing piece.

On The Data Diva Talks Privacy Podcast, Debbie Reynolds, “The Data Diva,” welcomes Bryan Lee, Founder and General Partner at Privatus Consulting, to discuss why effective privacy programs succeed through strong communication rather than technical jargon. Lee explains how privacy engineering serves as a critical link between policy, compliance, and technical teams, and why clear communication is often the deciding factor in whether organizations achieve their privacy goals.

He explains why many companies fail at privacy, despite genuine intent, often because coordination among stakeholders breaks down. Lee reflects on his own career path, transitioning from intelligence work to privacy consulting, and shares insights into how organizations can overcome communication barriers to develop programs that are both compliant and effective. The conversation also covers the risks of misjudging AI, particularly the mistake of treating systems as if they were human, and how this misunderstanding creates governance and operational problems.

This episode offers strategies for bridging gaps, enhancing collaboration, and addressing complex issues, resonating with privacy leaders, compliance professionals, and anyone seeking to understand how effective communication drives successful outcomes in organizations.

Hosted by Debbie Reynolds, “The Data Diva,” bringing global leaders together on privacy, cybersecurity, and emerging technology.

Thanks to our Data Diva Talks Privacy Podcast Privacy Ambassador Sponsor, Piwik PRO. Piwik PRO is a privacy-first analytics and customer data platform that helps organizations to make informed decisions across their websites, apps, and ad campaigns. They bring an unprecedented level of data transparency, so you know exactly how your data is collected, used, and protected. Marketers gain valuable insights, while legal teams rest assured knowing that your client data remains protected, even as the privacy landscape evolves. Learn more at piwik.pro. Enjoy the show.
Sep 9, 2025 • 26min

The Data Diva E253 - Priya Gnanasekaran and Debbie Reynolds

Episode 253 – Priya Gnanasekaran, Senior Security Engineer at LAB3 (Australia)

Can AI be both a risk and a defense? In this episode, Priya Gnanasekaran shares how organizations can manage today’s most pressing cybersecurity challenges.

On The Data Diva Talks Privacy Podcast, Debbie Reynolds, “The Data Diva,” speaks with Priya Gnanasekaran, Senior Security Engineer at LAB3 (Australia), about the complex challenges cybersecurity leaders face with AI, IoT, and cloud security. Drawing on her decade-long career spanning DevSecOps, engineering, and operations, Gnanasekaran explains why cybersecurity cannot be reduced to a single field but must be understood as an amalgamation of multiple interconnected disciplines. She highlights the distinction between IT and cybersecurity and explains why this distinction is crucial for executives making risk and investment decisions.

The conversation examines AI’s dual role in cybersecurity, acting both as a new attack vector and as a defensive tool that, when used responsibly, can strengthen organizational security. Gnanasekaran also details the risks of shadow AI and unmonitored enterprise use, which expose businesses to unmanaged vulnerabilities. She addresses weaknesses in IoT ecosystems, including outdated devices and hardware flaws, and argues that these cannot be solved through patchwork responses. Instead, she emphasizes the importance of “shifting left” by embedding security earlier in DevSecOps processes. Gnanasekaran stresses that cybersecurity cannot be treated like a fire department that responds only after damage has been done.

This discussion offers valuable lessons on resilience, innovation, and proactive strategy, applicable not only to security professionals but also to anyone interested in understanding how digital systems can be better protected and managed.

Hosted by Debbie Reynolds, “The Data Diva,” bringing global leaders together on privacy, cybersecurity, and emerging technology.
Sep 2, 2025 • 45min

The Data Diva E252 - J Mark Bishop and Debbie Reynolds

🎙️ Episode 252 of The Data Diva Talks Privacy Podcast – J. Mark Bishop and Debbie Reynolds, The Data Diva, on AI myths, GDPR safeguards, and energy costs of large models

In this episode, I speak with J. Mark Bishop, Professor of Cognitive Computing Emeritus at Goldsmiths, University of London, and Scientific Advisor to Fact360, about the myths and realities of artificial intelligence.

Our discussion begins with how we describe AI itself. Mark challenges the language we use, terms like “learning” in machine learning, and argues that much of what is happening is, in fact, just optimization. We examine how anthropomorphic language about AI can create misplaced expectations, shaping how the public and policymakers perceive these technologies.

We also examine the tension between AI and privacy, particularly in relation to transparency. Mark reflects on the protections built into frameworks like the GDPR, which explicitly address how personal data may be used when AI makes or informs significant decisions. We consider how these rules strike a balance between individual rights and the need to use AI systems in business and government.

Another major theme is metadata analysis. Mark shares insights from his work at Fact360, where analyzing patterns of communication, without even looking at message content, can reveal signals of organizational change, insider threats, or misconduct. This approach has roots in traffic analysis techniques dating back to World War II, showing how metadata continues to play a powerful role in intelligence and security.

We also discuss the scaling laws of AI and whether building increasingly larger data centers will ultimately lead to artificial general intelligence. Mark strongly critiques this idea, raising concerns about the energy demands of massive AI models. He points out the environmental and ethical costs of data centers, which consume energy on the scale of entire nations, especially when many communities still live in energy poverty.

This episode brings together philosophy, technology, governance, and ethics, a conversation that questions not just what AI is, but what it should be.

Subscribe to “The Data Diva” Talks Privacy Podcast, now available on all major podcast directories, including Apple Podcasts, Spotify, Stitcher, iHeart Radio, and more.

Hosted by Data Diva Media. Debbie Reynolds Consulting, LLC

#AI #ethics #metadata #GDPR #datagovernance #sustainability #dataprivacy #datadiva #privacy #cybersecurity
Aug 26, 2025 • 39min

The Data Diva E251 - Ilia Dubovtsev and Debbie Reynolds

In episode 251 of The Data Diva Talks Privacy Podcast, host Debbie Reynolds, “The Data Diva,” welcomes Ilia Dubovtsev, Founder of Dub Consulting, joining from Moscow, Russia. The discussion centers on the complexities of privacy in the workplace and how emerging technologies, especially AI, are reshaping the boundaries of personal data and institutional responsibility. Ilia shares his framework for operationalizing privacy, built on the principles of accountability, fairness, and balancing interests, and explains why this model is essential when managing employee data in digitally driven environments.

Ilia shares his belief that privacy is the maximum expression of individual liberty. He developed a three-pillar framework for privacy programs: accountability, balance of interest, and fairness. He explains how these principles can be applied across jurisdictions, whether in Russia, Europe, or the United States. Despite the United States’ lack of a comprehensive federal privacy law, Ilia notes that core principles like non-discrimination and transparency often serve as common ground for workplace privacy protections.

Debbie and Ilia dive deep into the complexities of employee privacy, comparing regulatory and cultural variations. They acknowledge that workplace data, often governed by contract law, labor law, and surveillance practices, is uniquely sensitive because employees have fewer choices about whether and how their data is collected. The conversation shifts to the influence of AI in the workplace. Ilia envisions AI empowering employees by reducing dependency on traditional corporate structures, potentially shifting employer-employee dynamics to a more equitable “peer” relationship. He proposes a new privacy policy model that includes (1) transparent data practices, (2) distinct policy boundaries across employment stages, and (3) accessible remedies for challenging data misuse. Ilia stresses the need for literacy, both technical and legal, to protect workers and hold employers accountable.

The episode concludes with a thoughtful exchange on liberty, trust, and the potential of fair AI governance. Ilia emphasizes that privacy must be preserved through principled regulation and public education, rather than a proliferation of fragmented, burdensome rules. He cites the U.S. scholarly conversation around the “duty of loyalty” and calls for frameworks that ensure both data accountability and empowerment for individuals.

#EmployeePrivacy #GlobalPrivacyFramework #AIinWorkplace #DubConsulting #PrivacyRights #DigitalGovernance #DataLiberty #WorkplaceEthics #AIandHR #PrivacyLiteracy
Aug 19, 2025 • 33min

The Data Diva E250 - Marianne Mazaud and Debbie Reynolds

In episode 250 of The Data Diva Talks Privacy Podcast, host Debbie Reynolds, “The Data Diva,” welcomes Marianne Mazaud, Co-Founder of AI ON US, an International Executive Summit Focused on Responsible Artificial Intelligence, co-created with Thomas Lozopone. They explore the powerful relationship between AI, privacy, and trust, emphasizing how leaders can take actionable steps to create inclusive and ethically grounded AI systems.

Marianne shares insights from her extensive experience in creative performance marketing and brand protection, including how generative AI technologies have created both opportunities and new risks. She stresses the importance of privacy and inclusion in AI governance, especially in high-risk sectors like healthcare and education.

The conversation moves to public trust in AI. Marianne references a study revealing widespread distrust in AI systems due to cybersecurity concerns, algorithmic bias, and lack of transparency. She highlights the need to involve more diverse voices, including individuals with disabilities and children, in the development of emerging technologies. Marianne and Debbie also examine the role of data privacy in consumer trust, citing a PricewaterhouseCoopers report showing that 83% of consumers believe data protection is essential to building trust with businesses.

They compare AI regulatory landscapes across the European Union and the United States. Marianne outlines how the EU AI Act places joint responsibility on AI developers and providers, which can introduce compliance complexities, especially for small businesses. She explains how these regulations can be difficult to implement retroactively and may impact innovation when not considered early in the development process.

Marianne closes by introducing the AI ON US initiative and the International Summit on Responsible AI for Executives. These efforts are designed to support leaders navigating AI governance through immersive workshops, best practices, and applied exercises. She also describes the Arborus Charter, a commitment to gender equality and inclusion in AI that has been adopted by 150 companies globally.

They also discuss the erosion of public trust in AI and the contributing role of biased algorithms, black-box decision-making, and regulatory fragmentation across regions. Marianne describes the uneven distribution of protections for vulnerable populations, such as children and persons with disabilities, and the failure of many AI systems to account for culturally or biologically diverse user bases. She emphasizes that privacy harms are not only about data collection but also about downstream effects and misuse, especially in sectors like healthcare, hiring, and public policy.

Debbie and Marianne contrast the emerging regulatory models in the United States and the European Union, noting that the U.S. often lacks forward-looking obligations for AI developers, whereas the EU imposes preemptive risk requirements. Despite these differences, both agree that building AI systems that are trustworthy, explainable, and fair must become a global imperative.
Aug 12, 2025 • 42min

The Data Diva E249 - Marlyse McQuillen and Debbie Reynolds

In episode 249 of The Data Diva Talks Privacy Podcast, host Debbie Reynolds, “The Data Diva,” welcomes Marlyse McQuillen, Vice President of Regulatory Compliance, Privacy, and AI at IntegraConnect LLC. Their conversation offers a multidimensional exploration of privacy, from professional ethics and emerging legal conflicts to education policy and AI governance.

Marlyse shares her journey into privacy law, which began during her work as a corporate attorney and expanded through roles in sectors such as health care, finance, and security. She reflects on her professional evolution and her aspirations to bring her cross-industry expertise to companies, especially as organizations increasingly confront regulatory pressure in the areas of consumer data and artificial intelligence.

The conversation dives into privacy issues in healthcare, where Marlyse emphasizes the risks of digital health data in a landscape that continues to shift toward value-based care. She highlights how HIPAA and HIPAA-adjacent laws or obligations create complexities in addressing data outside of traditional clinical systems. This becomes especially urgent when companies face financial instability. Marlyse details the example of 23andMe, a major bankruptcy involving genetic data, in which states raised objections to the sale of consumer health information, and the court ultimately appointed a consumer privacy ombudsman. She and Debbie underscore the long-term damage to trust when sensitive personal data is treated as a transferable asset during bankruptcy proceedings.

The discussion also touches on public digital exposure through the lens of the “Coldplaygate” incident, where a viral Kiss Cam moment led to the resignation of a company CEO. Marlyse and Debbie reflect on how these seemingly lighthearted digital moments can carry real consequences, especially in an era of high surveillance and online amplification. They emphasize the importance of discretion and privacy boundaries, even in public settings.

Marlyse brings a strong policy perspective, advocating for legislative updates to genetic privacy laws and more comprehensive protection for children in schools. She is actively working with the Plunk Foundation to build a digital literacy curriculum that educates young people on safe AI use and privacy rights. She envisions federal mandates for consumer data protection education as a way to create foundational awareness early in life.

The episode closes with personal reflections on the rewarding yet difficult work of privacy leadership, the importance of soft skills development, and Marlyse’s creative approach to privacy education, including a song she wrote to raise awareness about data rights. Throughout, she champions practical reforms, better breach responses, and a cultural shift toward accountability in both public and private uses of technology.

#DigitalHealthPrivacy #GeneticData #AIinEducation #ConsumerTrust #Coldplaygate #PrivacyEducation #PlunkFoundation #PrivacyBankruptcy #PrivacyLeadership #DigitalLiteracy #DataRights

Thanks to our Data Diva Talks Privacy Podcast Privacy Ambassador Sponsor, Piwik PRO. Piwik PRO is a privacy-first analytics and customer data platform that helps organizations to make informed decisions across their websites, apps, and ad campaigns. Learn more at piwik.pro.
Aug 5, 2025 • 46min

The Data Diva E248 - Damilola Adenuga-Taiwo and Debbie Reynolds

In episode 248 of “The Data Diva” Talks Privacy Podcast, Debbie Reynolds talks to Damilola Adenuga-Taiwo, a cybersecurity and compliance professional with extensive expertise in payment systems, PCI DSS, ISO standards, and governance frameworks. We discuss his unconventional path into the field, beginning with teaching postgraduate technology courses and evolving into global consulting roles focused on cybersecurity risk, assessments, and compliance. Damilola explains the critical role of standards like PCI DSS in securing cardholder data, how global payment brands shaped their adoption, and why such frameworks succeed even without legal mandates.

We explore the nuanced differences between privacy and cybersecurity, the challenges of implementing compliance in high-friction environments such as digital payments, and how financial institutions have effectively balanced innovation with data protection. Damilola also explores the convergence of security and privacy, illustrating how standards require organizations to consider not only what data is collected, but also why, for how long, and under what conditions it must be deleted.

A major theme of the episode is the growing concern over AI misuse, ranging from deepfakes and fraud to the psychological implications of relying on generative AI daily. Damilola reflects on how tools like ChatGPT are rapidly transforming work habits, raising ethical questions about digital dependency, and blurring the line between convenience and risk. We also discuss the widening regulatory gap, the need for proactive standards, and how cybersecurity professionals can bridge the chasm between policy, practice, and public trust.

This episode offers practical and philosophical insights for anyone grappling with the accelerating pace of AI, the rigor of compliance, and the evolving definitions of data responsibility. We hope for a future where robust compliance frameworks, informed users, and ethical innovation collaborate to ensure digital safety and personal autonomy.
Jul 29, 2025 • 43min

The Data Diva E247 - Michael Robbins and Debbie Reynolds

In episode 247 of “The Data Diva” Talks Privacy Podcast, Debbie Reynolds talks to Michael Robbins, Social Entrepreneur and Civic Builder, and a visionary in building human-plus-digital learning ecosystems. We discuss his decades-long journey at the intersection of education, technology, and community, from grassroots innovation to White House policy. Michael shares a compelling vision for the future of AI in education, centered on empowering individuals to create and control their own AI narratives. He introduces his data model, called DOTES (Do, Observe, Tell, Explore, Show), which captures real-world learning experiences and enables the training of personalized AI agents grounded in data integrity and digital personhood.

Our conversation explores the concept of implication models, AI systems that learn from and work for people, rather than exploiting their data. Michael draws parallels between decentralized data governance and the design of AI trusts, where individuals have full control over their digital identities and contributions. We also explore the limitations of current large language models and discuss new frameworks that could rebuild AI from the ground up, centering privacy, consent, and community.

Together, we envision a future where youth and adults alike use AI not as a replacement for human intelligence but as a tool for self-expression, empowerment, and democratic participation. This episode is a masterclass in AI ethics, digital sovereignty, and the urgent need to shift from extractive technologies to human-first ecosystems. We hope for a future where data privacy is not just a legal checkbox, but a fundamental principle of technological design and societal infrastructure.
Jul 22, 2025 • 41min

The Data Diva E246 - Aparna Bhushan and Debbie Reynolds

In episode 246 of “The Data Diva” Talks Privacy Podcast, Debbie Reynolds talks to Aparna Bhushan, a co-host of the Rethinking Tech podcast and a seasoned data protection and governance attorney licensed in both the U.S. and Canada. Together, they explore the critical intersection of geopolitics, tech policy, and data ethics. Aparna shares her professional journey from startups to global corporations and international organizations, such as UNICEF, where her passion for ethical and practical data governance took root. The conversation explores the fast-paced and often contradictory dynamics facing governments, companies, and users in the digital age, highlighting how the collapse of traditional rules has left many institutions scrambling for direction.

Debbie and Aparna discuss how companies are navigating conflicting global regulations, the growing risks of consumer backlash, and the real-world consequences of poor data decisions, such as the fallout from GM’s data broker scandal and the potential sale of sensitive genetic data in the 23andMe bankruptcy. They also address the dangers of regulation lag, scope creep, and public distrust in platforms that mishandle personal data. Aparna shares her perspective on the emerging global impact of the EU AI Act and the regulatory vacuum in the U.S., arguing that proactive privacy strategies and consumer trust are more valuable than merely checking compliance boxes.

The two dive deep into the complexities of age verification laws, questioning the practicality and privacy implications of requiring IDs or weakening encryption to protect children online. They emphasize the need for innovation that respects user rights and propose creative approaches to solving systemic data challenges, including Aparna’s vision for AI systems that can audit other AI models for fairness and bias.

To close the episode, Aparna shares her global privacy wish list: a more conscious, intentional user culture and a renewed investment in responsible technology development. This thoughtful and wide-ranging conversation is a must-listen for anyone interested in the ethical evolution of data governance in a rapidly shifting global landscape.
