
The Shifting Privacy Left Podcast

Latest episodes

Jul 23, 2024 • 39min

S3E15: 'New Certification: Enabling Privacy Engineering in AI Systems' with Amalia Barthel & Eric Lybeck

In this episode, I'm joined by Amalia Barthel, founder of Designing Privacy, a consultancy that helps businesses integrate privacy into business operations; and Eric Lybeck, a seasoned independent privacy engineering consultant with over two decades of experience in cybersecurity and privacy. Eric recently served as Director of Privacy Engineering at Privacy Code. Today, we discuss: the importance of more training for privacy engineers on AI system enablement; why it's not enough for privacy professionals to focus solely on AI governance; and how their new hands-on course, the "Privacy Engineering in AI Systems Certificate program," can fill this need. Throughout our conversation, we explore the differences between AI system enablement and AI governance and why Amalia and Eric were inspired to develop this certification program. They share examples of what is covered in the course and outline the key takeaways and practical toolkits that enrollees will get, including case studies, frameworks, and weekly live sessions throughout.
Topics Covered:
- How AI system enablement differs from AI governance, and why we should focus on AI as part of privacy engineering
- Why Eric and Amalia designed an AI systems certificate course that bridges the gaps between privacy engineers and privacy attorneys
- The unique ideas and practices presented in this course and what attendees will take away
- Frameworks, cases, and mental models that Eric and Amalia will cover in their course
- How Eric & Amalia structured the Privacy Engineering in AI Systems Certificate program's coursework
- The importance of upskilling for privacy engineers and attorneys

Resources Mentioned:
- Enroll in the 'Privacy Engineering in AI Systems Certificate program' (save $300 with promo code PODCAST300 - enter it into the Inquiry Form instead of directly purchasing the course)
- Read: 'The Privacy Engineer's Manifesto'
- Take the European Commission's free course, 'Understanding Law as Code'

Guest Info:
- Connect with Amalia on LinkedIn
- Connect with Eric on LinkedIn
- Learn about Designing Privacy

TRU Staffing Partners: Top privacy talent - when you need it, where you need it.
Shifting Privacy Left Media: Where privacy engineers gather, share, & learn.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
Jun 25, 2024 • 48min

S3E14: 'Why We Need Fairness Enhancing Technologies Rather Than PETs' with Gianclaudio Malgieri (Brussels Privacy Hub)

Today, I chat with Gianclaudio Malgieri, an expert in privacy, data protection, AI regulation, EU law, and human rights. Gianclaudio is an Associate Professor of Law at Leiden University, the Co-director of the Brussels Privacy Hub, Associate Editor of the Computer Law & Security Review, and co-author of the paper "The Unfair Side of Privacy Enhancing Technologies: Addressing the Trade-offs Between PETs and Fairness". In our conversation, we explore this paper and why privacy-enhancing technologies (PETs) are essential but not enough on their own to address digital policy challenges. Gianclaudio explains why PETs alone are insufficient solutions for data protection and discusses the obstacles to achieving fairness in data processing, including bias, discrimination, social injustice, and market power imbalances. We discuss data alteration techniques such as anonymization, pseudonymization, synthetic data, and differential privacy in relation to GDPR compliance. Plus, Gianclaudio highlights the issues of representation for minorities in differential privacy and stresses the importance of involving these groups in identifying bias and assessing AI technologies. We also touch on the need for ongoing research on PETs to address these challenges and share our perspectives on the future of this research.
Topics Covered:
- What inspired Gianclaudio to research fairness and PETs
- How PETs are about power and control
- The legal / GDPR and computer science perspectives on 'fairness'
- How fairness relates to discrimination, social injustices, and market power imbalances
- How data obfuscation techniques relate to AI / ML
- How well the use of anonymization, pseudonymization, and synthetic data techniques addresses data protection challenges under the GDPR
- How the use of differential privacy techniques may lead to unfairness
- Whether the use of encrypted data processing tools and federated and distributed analytics achieves fairness
- 3 main PET shortcomings and how to overcome them: 1) bias discovery; 2) harms to people belonging to protected groups and to individuals' autonomy; and 3) market imbalances
- Areas that warrant more research and investigation

Resources Mentioned:
- Read: "The Unfair Side of Privacy Enhancing Technologies: Addressing the Trade-offs Between PETs and Fairness"

Guest Info:
- Connect with Gianclaudio on LinkedIn
- Learn more about the Brussels Privacy Hub

Privado.ai: Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Jun 18, 2024 • 52min

S3E13: 'Building Safe AR / VR / MR / XR Technology' with Spatial Computing Pioneer Avi Bar-Zeev (XR Guild)

In this episode, I had the pleasure of talking with Avi Bar-Zeev, a true tech pioneer and the Founder and President of The XR Guild. With over three decades of experience, Avi has an impressive resume, including launching Disney's Aladdin VR ride, developing Second Life's 3D worlds, co-founding Keyhole (which became Google Earth), co-inventing Microsoft's HoloLens, and contributing to the Amazon Echo Frames. The XR Guild is a nonprofit organization that promotes ethics in extended reality (XR) through mentorship, networking, and educational resources. Throughout our conversation, we dive into privacy concerns in augmented reality (AR), virtual reality (VR), and the metaverse, highlighting increased data misuse and manipulation risks as technology progresses. Avi shares his insights on how product and development teams can continue to be innovative while still upholding responsible, ethical standards with clear principles and guidelines to protect users' personal data. Plus, he explains the role of eye-tracking technology and why he advocates classifying its data as health data. We also discuss the challenges of anonymizing biometric data, informed consent, and the need for ethics training across the tech industry.
Topics Covered:
- The top privacy and misinformation issues that Avi has noticed when it comes to AR, VR, and metaverse data
- Why Avi advocates for classifying eye-tracking data as health data
- The dangers of unchecked AI manipulation and why we need to be more aware and in control of our online presence
- The ethical considerations for experimentation in highly regulated industries
- Whether it is possible to anonymize VR and AR data
- Ways product and development teams can be innovative while maintaining ethics and avoiding harm
- AR risks vs. VR risks
- Advice and privacy principles to keep in mind for technologists who are building AR and VR systems
- Understanding The XR Guild

Resources Mentioned:
- Read: The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology
- Read: Our Next Reality

Guest Info:
- Connect with Avi on LinkedIn
- Check out the XR Guild
- Learn about Avi's Consulting Services
Jun 4, 2024 • 45min

S3E12: 'How Intentional Experimentation in A/B Testing Supports Privacy' with Matt Gershoff (Conductrics)

Matt Gershoff, Co-founder of Conductrics, discusses A/B testing and data collection. He emphasizes intentional data collection to support privacy and shares insights on the value of experimentation. Topics include minimizing privacy risks, data collection processes, and the importance of attending privacy conferences.
Apr 30, 2024 • 54min

S3E11: 'Decision-Making Governance & Design: Combating Dark Patterns with Fair Patterns' with Marie Potel-Saville (Amurabi & FairPatterns)

In this episode, Marie Potel-Saville joins me to shed light on the widespread issue of dark patterns in design. With her background in law, Marie founded the 'FairPatterns' project with her award-winning privacy and innovation studio, Amurabi, to detect and fix large-scale dark patterns. Throughout our conversation, we discuss the different types of dark patterns, why it is crucial for businesses to prevent them from being coded into their websites and apps, and how designers can ensure that they are designing fair patterns in their projects.

Dark patterns are interfaces that deceive or manipulate users into unintended actions by exploiting cognitive biases inherent in decision-making processes. Marie explains how dark patterns harm our economic and democratic models, their negative impact on individual agency, and the ways that FairPatterns provides countermeasures and safeguards against the exploitation of people's cognitive biases. She also shares tips for designers and developers on designing and architecting fair patterns.

Topics Covered:
- Why Marie shifted her career path from practicing law to deploying and lecturing on Legal UX design & combatting dark patterns at Amurabi
- The definition of 'dark patterns' and the difference between them and 'deceptive patterns'
- What motivated Marie to found FairPatterns.com, and her science-based methodology to combat dark patterns
- The importance of decision-making governance
- Why execs should care about preventing dark patterns from being coded into their websites, apps, & interfaces
- How dark patterns exploit our cognitive biases to our detriment
- What global laws say about dark patterns
- How dark patterns create structural risks for our economies & democratic models
- How 'fair patterns' serve as countermeasures to dark patterns
- The 7 categories of dark patterns in UX design & associated countermeasures
- Advice for designers & developers to ensure that they design & architect fair patterns when building products & features
- How companies can boost sales & gain trust with fair patterns
- Resources to learn more about dark patterns & countermeasures

Guest Info:
- Connect with Marie on LinkedIn
- Learn more about Amurabi
- Check out FairPatterns.com

Resources Mentioned:
- Learn about the 7 Stages of Action Model
- Take FairPatterns' course: Dark Patterns 101
- Read: Deceptive Design Patterns
- Listen to FairPatterns' Fighting Dar
Apr 9, 2024 • 40min

S3E10: 'How a Privacy Engineering Center of Excellence Shifts Privacy Left' with Aaron Weller (HP)

In this episode, I sat down with Aaron Weller, the Leader of HP's Privacy Engineering Center of Excellence (CoE), focused on providing technical solutions for privacy engineering across HP's global operations. Throughout our conversation, we discuss: what motivated HP's leadership to stand up a CoE for Privacy Engineering; Aaron's approach to staffing the CoE; how a CoE can shift privacy left in a large, matrixed organization like HP's; and how to leverage the CoE to proactively manage privacy risk. Aaron emphasizes the importance of understanding an organization's strategy when creating a CoE and shares his methods for gathering data to inform the center's roadmap and team building. He also highlights the great impact that a Center of Excellence can offer and gives advice for implementing one in your organization. We touch on the main challenges in privacy engineering today and the value of designing user-friendly privacy experiences. In addition, Aaron provides his perspective on selecting the right combination of Privacy Enhancing Technologies (PETs) for anonymity, how to go about implementing PETs, and the role that AI governance plays in his work.
Topics Covered:
- Aaron's deep privacy and consulting background and how he ended up leading HP's Privacy Engineering Center of Excellence
- The definition of a "Center of Excellence" (CoE) and how a Privacy Engineering CoE can drive value for an organization and shift privacy left
- What motivates a company like HP to launch a CoE for Privacy Engineering, and what its reporting line should be
- Aaron's approach to creating a Privacy Engineering CoE roadmap; his strategy for staffing this CoE; and the skills & abilities that he sought
- How HP's Privacy Engineering CoE works with the business to advise on, and select, the right PETs for each business use case
- Why it's essential to know the privacy guarantees that your organization wants to assert before selecting the right PETs to get you there
- Lessons learned from setting up a Privacy Engineering CoE and how to get executive sponsorship
- The amount of time that privacy teams have had to work on AI issues over the past year, and advice on preventing burnout
- Aaron's hypothesis about the value of getting an early handle on governance over the adoption of innovative technologies
- The importance of being open to continuous learning in the field of privacy engineering

Guest Info & Resources:
- Connect with Aaron on LinkedIn
- Learn about HP's Privacy Engineering Center of Excellence
- Review the OWASP Machine Learning Security Top 10
- Review the OWASP Top 10 for LLM Applications
Apr 2, 2024 • 43min

S3E9: 'Building a Culture of Privacy & Achieving Compliance without Sacrificing Innovation' with Amaka Ibeji (Cruise)

Today, I'm joined by Amaka Ibeji, Privacy Engineer at Cruise, where she designs and implements robust privacy programs and controls. In this episode, we discuss Amaka's passion for creating a culture of privacy and compliance within organizations and engineering teams. Amaka also hosts the PALS Parlor Podcast, where she speaks to business leaders and peers about privacy, AI governance, leadership, and security and explains technical concepts in a digestible way. The podcast aims to enable business leaders to do more with their data and provides a way for the community to share knowledge with one another. In our conversation, we touch on her career trajectory from security engineer to privacy engineer and the intersection of cybersecurity, privacy engineering, and AI governance. We highlight the importance of early engagement with various technical teams to enable innovation while still achieving privacy compliance. Amaka also shares the privacy-enhancing technologies (PETs) that she is most excited about, and she recommends resources for those who want to learn more about strategic privacy engineering. Amaka emphasizes that privacy is a systemic, 'wicked problem' and offers her tips for understanding and approaching it.
Topics Covered:
- How Amaka's compliance-focused experience at Microsoft helped prepare her for her Privacy Engineering role at Cruise
- Where privacy overlaps with the development of AI
- Advice for shifting privacy left to make privacy stretch beyond a compliance exercise
- What works well and what doesn't when building a 'culture of privacy'
- Privacy by Design approaches that make privacy & innovation a win-win rather than a zero-sum game
- Privacy Engineering trends that Amaka sees, and the PETs about which she's most excited
- Amaka's Privacy Engineering resource recommendations, including: Hoepman's "Privacy Design Strategies" book; the LINDDUN Privacy Threat Modeling Framework; and the PLOT4AI Framework
- "The PALS Parlor Podcast," focused on Privacy Engineering, AI Governance, Leadership, & Security: why Amaka launched the podcast; her intended audience; and topics that she plans to cover this year
- The importance of collaboration, building a community of passionate privacy engineers, and addressing the systemic issue of privacy

Guest Info & Resources:
- Follow Amaka on LinkedIn
- Listen to The PALS Parlor Podcast
- Read Jaap-Henk Hoepman's "Privacy Design Strategies (The Little Blue Book)"
- Read Jason Cronk's "Strategic Privacy by Design, 2nd Edition"
- Check out The LINDDUN Privacy Threat Modeling Framework
- Check out The Privacy Library of Threats for Artificial Intelligence (PLOT4AI) Framework
Mar 26, 2024 • 1h 16min

S3E8: 'Recent FTC Enforcement: What Privacy Engineers Need to Know' with Heidi Saas (H.T. Saas)

In this week's episode, I am joined by Heidi Saas, a privacy lawyer with a reputation for advocating for products and services built with privacy by design and against the abuse of personal data. In our conversation, she dives into recent FTC enforcement actions, analyzing five FTC actions and some enforcement sweeps by Colorado & Connecticut. Heidi shares her insights on the effect of the FTC enforcement actions and what privacy engineers need to know, emphasizing the need for data management practices to be transparent, accountable, and based on affirmative consent. We cover the role of privacy engineers in ensuring compliance with data privacy laws; why 'browsing data' is 'sensitive data'; the challenges companies face regarding data deletion; and the need for clear consent mechanisms, especially with the collection and use of location data. We also discuss the need to audit the privacy posture of products and services - which includes a requirement to document who made certain decisions - and how to prioritize risk analysis to proactively address risks to privacy.

Topics Covered:
- Heidi's journey into privacy law and advocacy for privacy by design and default
- How the FTC brings enforcement actions, the effect of their settlements, and why privacy engineers should pay closer attention
- Case 1: FTC v. InMarket Media - Heidi explains the implication of the decision: data that are linked to a mobile advertising identifier (MAID) or an individual's home are not considered de-identified
- Case 2: FTC v. X-Mode Social / OutLogic - Heidi explains the implications of the decision, focused on: affirmative express consent for location data collection; the definition of a 'data product assessment' and audit programs; and data retention & deletion requirements
- Case 3: FTC v. Avast - Heidi explains the implication of the decision: 'browsing data' is considered 'sensitive data'
- Case 4: The People (CA) v. DoorDash - Heidi explains the implications of the decision, based on CalOPPA: companies that share personal data with one another as part of a 'marketing cooperative' are, in fact, selling data
- Heidi discusses recent state enforcement sweeps for privacy, specifically in Colorado and Connecticut, and clarity around breach reporting timelines
- The need to prioritize independent third-party audits for privacy
- Case 5: FTC v. Kroger - Heidi explains why the FTC's blocking of Kroger's merger with Albertsons was based on antitrust and privacy harms, given the sheer amount of personal data that they process
- Tools and resources for keeping up with FTC cases and connecting with your privacy community

Guest Info:
- Follow Heidi on LinkedIn
- Read (book): 'Means of Control: How the Hidden Alliance of Tech and Government is Creating a New American Surveillance State'
Mar 19, 2024 • 43min

S3E7: 'Personal CRM: Embracing Digital Minimalism & Privacy Empowerment' with Chris Zeunstrom (Yorba)

Chris Zeunstrom, CEO of Ruca and Yorba, discusses building a privacy-first company, the digital minimalist movement, and decentralized identity/storage. Yorba's features, collaboration with Consumer Reports, and the importance of decentralized solutions in enhancing privacy and security are highlighted. Future plans for Yorba's data management tools are also teased.
Mar 5, 2024 • 54min

S3E6: 'Keys to Good Privacy Implementation: Exploring Anonymization, Consent, & DSARs' with Jake Ottenwaelder (Integrative Privacy)

In this week's episode, I sat down with Jake Ottenwaelder,  Principal Privacy Engineer at Integrative Privacy LLC. Throughout our conversation, we discuss Jake’s holistic approach to privacy implementation that considers business, engineering, and personal objectives, as well as the role of anonymization, consent management, and DSAR processes for greater privacy. Jake believes privacy implementation must account for the interconnectedness of privacy technologies and human interactions. He highlights what a successful implementation looks like and the negative consequences when done poorly. We also dive into the challenges of implementing privacy in fast-paced, engineering-driven organizations. We talk about the complexities of anonymizing data (a very high bar) and he offers valuable suggestions and strategies for achieving anonymity while making the necessary resources more accessible. Plus, Jake shares his advice for organizational leaders to see themselves as servant-leaders, leaving a positive legacy in the field of privacy. 
Topics Covered:
- What inspired Jake's initial shift from security engineering to privacy engineering, with a focus on privacy implementation
- How Jake's previous role at Axon helped him shift his mindset to privacy
- Jake's holistic approach to implementing privacy
- The qualities of a successful implementation and the consequences of an unsuccessful implementation
- The challenges of implementing privacy in large organizations
- Common blockers to the deployment of anonymization
- Jake's perspective on using differential privacy techniques to achieve anonymity
- Common blockers to implementing consent management capabilities
- The importance of understanding data flow & lineage, and auditing data deletion
- Holistic approaches to implementing a streamlined and compliant DSAR process with minimal business disruption
- Why Jake believes it's important to maintain a servant-leader mindset in privacy

Guest Info:
- Connect with Jake on LinkedIn
- Learn more about Integrative Privacy LLC
