
The Shifting Privacy Left Podcast

Latest episodes

Nov 7, 2023 • 50min

S2E34: "Embedding Privacy by Design & Threat Modeling for AI" with Isabel Barberá (Rhite & PLOT4ai)

This week’s guest is Isabel Barberá, Co-founder, AI Advisor, and Privacy Engineer at Rhite, a consulting firm specializing in responsible and trustworthy AI and privacy engineering, and creator of the Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai) framework and card game. In our conversation, we discuss Isabel’s work with privacy by design, privacy engineering, privacy threat modeling, and building trustworthy AI, as well as Rhite’s forthcoming open-source self-assessment framework for AI maturity, SARAI®. As we wrap up the episode, Isabel shares details about PLOT4ai, her AI threat modeling framework and card game built on a library of threats for artificial intelligence.

Topics Covered:
- How Isabel became interested in privacy engineering, data protection, privacy by design, threat modeling, and trustworthy AI
- How companies are thinking (or not) about incorporating privacy-by-design strategies & tactics and privacy engineering approaches within their orgs today
- What steps companies can take to start investing in privacy engineering approaches, and whether AI has become a driver for such approaches
- Background on Isabel’s company, Rhite, and its mission to build responsible solutions for society and its individuals using a technical mindset
- What “Responsible & Trustworthy AI” means to Isabel
- The 5 core values that make up the acronym R-H-I-T-E, and why they’re important for designing and building products & services
- Isabel's advice for organizations as they approach AI risk assessments, analysis, & remediation
- The steps orgs can take to build responsible AI products & services
- What Isabel hopes to accomplish through Rhite's new framework, SARAI® (for AI maturity): an open-source AI self-assessment tool and framework, and an extension of the Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai) framework (i.e., a library of AI risks)
- What motivated Isabel to focus on threat modeling for privacy
- How PLOT4ai builds on LINDDUN (which focuses on software development) and extends threat modeling to the AI lifecycle stages: Design, Input, Modeling, & Output (a toy encoding follows this entry)
- How Isabel’s experience with the LINDDUN Go card game inspired her to develop a PLOT4ai card game to make threat modeling more accessible to teams
- Isabel's call for collaborators to contribute to the PLOT4ai open-source database of AI threats as the community grows

Resources Mentioned:
- Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai)
- PLOT4ai's GitHub Threat Repository
- "Threat Modeling Generative AI Systems with PLOT4ai"
- Self-Assessment for Responsible AI (SARAI®)
- LINDDUN Privacy Threat Model Framework
- "S2E19: Privacy Threat Modeling - Mitigating Privacy Threats in Software with Kim Wuyts (KU Leuven)"
- "Data Privacy: a runbook for engineers"
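For readers who want a feel for how a card-based threat library like PLOT4ai can be encoded, here is a minimal Python sketch. The four lifecycle stages are the ones named in the episode; the card fields and example threats are invented for illustration and do not reflect PLOT4ai's actual schema (see its GitHub threat repository for the real one).

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Stage(Enum):
    # PLOT4ai's four AI lifecycle stages, as named in the episode
    DESIGN = "Design"
    INPUT = "Input"
    MODELING = "Modeling"
    OUTPUT = "Output"

@dataclass
class ThreatCard:
    title: str
    stage: Stage
    question: str                      # cards prompt the team with a question
    applicable: Optional[bool] = None  # None = not yet assessed by the team

# Hypothetical entries for illustration only.
library = [
    ThreatCard("Unclear data provenance", Stage.INPUT,
               "Do you know where your training data comes from?"),
    ThreatCard("Re-identification via outputs", Stage.OUTPUT,
               "Could the model's outputs reveal information about individuals?"),
]

# A threat-modeling session walks the deck stage by stage:
for card in (c for c in library if c.stage is Stage.INPUT):
    print(f"[{card.stage.value}] {card.title}: {card.question}")
```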
Oct 31, 2023 • 56min

S2E33: "Using Privacy Code Scans to Shift Left into DevOps" with Vaibhav Antil (Privado)

This week, I sat down with Vaibhav Antil ('Vee'), Co-founder & CEO at Privado, a privacy tech platform that leverages privacy code scanning & data mapping to bridge the privacy engineering gap. Vee shares his personal journey into privacy, where he started out in Product Management and saw the need for privacy automation in DevOps. We discuss obstacles created by the rapid pace of engineering teams and the lack of a shared vocabulary with Legal / GRC. You'll learn how code scanning enables privacy teams to move swiftly and avoid blocking engineering. We then discuss the future of privacy engineering, its growth trends, and the need for cross-team collaboration. We highlight the importance of making privacy-by-design programmatic and discuss ways to scale up privacy reviews without stifling product innovation.

Topics Covered:
- How Vee moved from Product Manager to co-founding Privado, and why he focused on bringing Privacy Code Scanning to market
- What it means to "bridge the privacy engineering gap" and 3 reasons why Vee believes the gap exists
- How engineers can provide visibility into personal data collected and used by applications via Privacy Code Scans (a toy illustration follows this entry)
- Why engineering teams should 'shift privacy left' into DevOps
- How a Privacy Code Scanner differs from traditional static code analysis tools in security
- How Privado's Privacy Code Scanning & Data Mapping capabilities (for the SDLC) differ from personal data discovery, correlation, & data mapping tools (for the data lifecycle)
- How Privacy Code Scanning helps engineering teams comply with new laws like Washington State's 'My Health My Data Act'
- A breakdown of Privado's free "Technical Privacy Masterclass"
- Exciting features on Privado's roadmap, which support its vision to be the platform for collaboration between privacy operations & engineering teams
- Privacy engineering trends and Vee's predictions for the next two years

Privado Resources Mentioned:
- Free Course: "Technical Privacy Masterclass" (led by Nishant Bhajaria)
- Guide: Introduction to Privacy Code Scanning
- Guide: Code Scanning Approach to Data Mapping
- Slack: Privado's Privacy Engineering Community
- Open Source Tool: Play Store Data Safety Report Builder

Guest Info:
- Connect with Vee on LinkedIn
- Check out Privado's website
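To make the idea of a privacy code scan concrete, here is a deliberately naive sketch that flags identifiers in source files suggesting personal-data handling. Real scanners like Privado perform static data-flow analysis across many languages rather than pattern matching; nothing below reflects Privado's implementation, and the patterns are illustrative assumptions.

```python
import re
from pathlib import Path

# Toy patterns mapping identifiers in source code to personal-data categories.
PATTERNS = {
    "email":    re.compile(r"\bemail(_address)?\b", re.IGNORECASE),
    "phone":    re.compile(r"\bphone(_number)?\b", re.IGNORECASE),
    "location": re.compile(r"\b(lat|latitude|lon|longitude|geo)\b", re.IGNORECASE),
}

def scan(repo_root: str):
    """Yield (file, line_no, category) for each personal-data hit in the codebase."""
    for path in Path(repo_root).rglob("*.py"):
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for category, pattern in PATTERNS.items():
                if pattern.search(line):
                    yield str(path), line_no, category

if __name__ == "__main__":
    # Running this over a repo produces a rough inventory of where personal
    # data appears in code - the raw material for a data map.
    for hit in scan("."):
        print(*hit)
```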
Oct 24, 2023 • 49min

S2E32: "Privacy Red Teams, Protecting People & 23andMe's Data Leak" with Rebecca Balebako (Balebako Privacy Engineer)

This week’s guest is Rebecca Balebako, Founder and Principal Consultant at Balebako Privacy Engineer, where she enables data-driven organizations to build the privacy features that their customers love. In our conversation, we discuss all things privacy red teaming, including: how to disambiguate adversarial privacy tests from other software development tests; the importance of privacy-by-infrastructure; why privacy maturity influences the benefits received from investing in privacy red teaming; and why any database that identifies vulnerable populations should consider adversarial privacy as a form of protection. We also discuss the 23andMe security incident that took place in October 2023 and affected over 1 million Ashkenazi Jews (a genealogical ethnic group). Rebecca brings to light how Privacy Red Teaming and privacy threat modeling may have prevented this incident. As we wrap up the episode, Rebecca gives her advice to Engineering Managers looking to set up a Privacy Red Team and shares key resources.

Topics Covered:
- How Rebecca switched from software development to a focus on privacy & adversarial privacy testing
- What motivated Debra to shift left from her legal training to privacy engineering
- What 'adversarial privacy tests' are, why they're important, and how they differ from other software development tests
- Defining 'Privacy Red Teams' (a type of adversarial privacy test) & what differentiates them from 'Security Red Teams'
- Why Privacy Red Teams are best for orgs with mature privacy programs
- The 3 steps for conducting a Privacy Red Team attack
- How a Red Team differs from other privacy tests like conducting a vulnerability analysis or managing a bug bounty program
- How 23andMe's recent data leak, affecting 1 million Ashkenazi Jews, may have been avoided via Privacy Red Team testing
- How BigTech companies are staffing up their Privacy Red Teams
- Frugal ways for small and mid-sized organizations to approach adversarial privacy testing
- The future of Privacy Red Teaming and whether we should upskill security engineers or train privacy engineers on adversarial testing
- Advice for Engineering Managers who seek to set up a Privacy Red Team for the first time
- Rebecca's Red Teaming resources for the audience

Resources Mentioned:
- Listen to "S1E7: Privacy Engineers: The Next Generation" with Lorrie Cranor (CMU)
- Review Rebecca's Red Teaming Resources

Guest Info:
- Connect with Rebecca on LinkedIn
- Visit Balebako Privacy Engineer's website
Oct 10, 2023 • 52min

S2E31: "Leveraging a Privacy Ontology to Scale Privacy Processes" with Steve Hickman (Epistimis)

This week’s guest is Steve Hickman, the founder of Epistimis, a privacy-first process design tooling startup that evaluates rules and enables fixing privacy issues before they ever take effect. In our conversation, we discuss: why the biggest impediment to protecting and respecting privacy within organizations is the lack of a common language; why we need a common Privacy Ontology in addition to a Privacy Taxonomy; Epistimis' ontological approach and how it leverages semantic modeling for privacy rules checking; and examples of how Epistimis' Privacy Design Process tooling complements privacy tech solutions on the market rather than competing with them.

Topics Covered:
- How Steve’s deep engineering background in aerospace, retail, telecom, and then a short stint at Meta led him to found Epistimis
- Why it's been hard for companies to get privacy right at scale
- How Epistimis leverages 'semantic modeling' for rule checking, and how this helps scale privacy as part of an ontological approach (a miniature example follows this entry)
- The definition of a Privacy Ontology, and Steve's belief that everyone should use one for common understanding at all levels of the business
- Advice for designers, architects, and developers when it comes to creating and implementing privacy ontologies, taxonomies & semantic models
- How to make a Privacy Ontology usable
- How Epistimis' process design tooling works with discovery and mapping platforms like BigID & Secuvy.ai
- How Epistimis' process design tooling works along with a platform like Privado.ai, which scans a company's product code, surfaces privacy risks in the code, and detects processing activities for creating dynamic data maps
- How Epistimis' process design tooling works with PrivacyCode, which has a library of privacy objects and agile privacy implementations (e.g., success criteria & sample code), and delivers metrics on how the privacy engineering process is going
- Steve's call for collaborators who are interested in POCs and/or who can provide feedback on Epistimis' PbD process tooling
- What's next on the Epistimis roadmap, including wargaming

Resources Mentioned:
- Read Dan Solove's article, "Data is What Data Does: Regulating Based on Harm and Risk Instead of Sensitive Data"

Guest Info:
- Connect with Steve on LinkedIn
- Reach out to Steve via email
- Learn more about Epistimis
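To illustrate what an ontology-driven privacy rule check might look like in miniature, here is a hedged Python sketch: shared concepts (data, purposes, processing activities, consent) plus one machine-checkable rule. The concepts, field names, and rule are invented for illustration and do not reflect Epistimis' actual semantic models.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Purpose:
    name: str

@dataclass
class PersonalData:
    category: str                    # e.g., "contact", "health", "location"

@dataclass
class ProcessingActivity:
    name: str
    data: list[PersonalData]
    purpose: Purpose

@dataclass
class ConsentRecord:
    subject_id: str
    allowed_purposes: set[str] = field(default_factory=set)

def check(activity: ProcessingActivity, consent: ConsentRecord) -> list[str]:
    """Rule: every processing activity needs a consented purpose."""
    if activity.purpose.name not in consent.allowed_purposes:
        return [f"'{activity.name}' lacks consent for purpose '{activity.purpose.name}'"]
    return []

# Because concepts are shared, the check runs at design time,
# before the process ever takes effect.
consent = ConsentRecord("user-1", {"order-fulfillment"})
activity = ProcessingActivity("marketing-email",
                              [PersonalData("contact")], Purpose("marketing"))
print(check(activity, consent))    # flags the issue at design time
```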
Oct 3, 2023 • 1h

S2E30: "LLMs, Knowledge Graphs, & GenAI Architectural Considerations" with Shashank Tiwari (Uno)

Shashank Tiwari discusses ML/AI, temporal knowledge graphs, and Generative AI's impact on privacy. He emphasizes the need for architectural privacy considerations when using Generative AI and predicts enterprise adoption. The conversation delves into the benefits of temporal knowledge graphs and LLMs in creating causal discovery inference models to prevent privacy issues.
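As a one-line illustration of what makes a knowledge graph "temporal": facts are stored as edges with validity intervals, so queries can ask what was true at a given time. The sketch below is a generic, assumed schema, not anything described in the episode.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class Edge:
    subject: str
    relation: str
    obj: str
    valid_from: date
    valid_to: Optional[date] = None    # None = still valid

graph = [
    Edge("user-42", "consented_to", "analytics", date(2022, 1, 1), date(2023, 6, 1)),
    Edge("user-42", "consented_to", "ads", date(2023, 2, 1)),
]

def facts_at(edges, when: date):
    """Return the edges that were valid at a given point in time."""
    return [e for e in edges
            if e.valid_from <= when and (e.valid_to is None or when < e.valid_to)]

print(facts_at(graph, date(2023, 3, 1)))   # both consents held in March 2023
```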
Sep 26, 2023 • 55min

S2E29: "Synthetic Data in AI: Challenges, Techniques & Use Cases" with Andrew Clark and Sid Mangalik (Monitaur)

This week I welcome Dr. Andrew Clark, Co-founder & CTO of Monitaur and a trusted domain expert on machine learning, auditing, and assurance; and Sid Mangalik, Research Scientist at Monitaur and PhD student at Stony Brook University. I discovered Andrew and Sid's new show, The AI Fundamentalists Podcast, very much enjoyed their lively episode on synthetic data & AI, and am delighted to introduce them to my audience of privacy engineers. In our conversation, we explore why data scientists must stress test their model validations, especially for consequential systems that affect human safety and reliability; in fact, we have much to learn from aerospace engineering, a field that has been using ML/AI since the 1960s. We discuss the best and worst use cases for synthetic data; problems with LLM-generated synthetic data; what can go wrong when your AI models lack diversity; how to build fair, performant systems; & synthetic data techniques for use with AI.

Topics Covered:
- What inspired Andrew to found Monitaur and focus on AI governance
- Sid’s career path and his current PhD focus on NLP
- What motivated Andrew & Sid to launch their podcast, The AI Fundamentalists
- Defining 'synthetic data' & why academia takes a more rigorous approach to synthetic data than industry
- Whether the output of LLMs is synthetic data & the problem with training LLM base models on this data
- The best and worst 'synthetic data' use cases for ML/AI
- Why the 'quality' of input data is so important when training AI models
- Thoughts on OpenAI's announcement that it will use LLM-generated synthetic data, and critique of OpenAI's approach, the AI hype machine, and the problems with 'growth hacking' corner-cutting
- The importance of diversity when training AI models; using 'multi-objective modeling' to build fair & performant systems
- Andrew unpacks the "fairness through unawareness fallacy"
- How 'randomized data' differs from 'synthetic data'
- 4 techniques for using synthetic data with ML/AI: 1) the Monte Carlo method; 2) Latin hypercube sampling; 3) Gaussian copulas; & 4) random walking (see the sketch after this entry)
- What excites Andrew & Sid about synthetic data and how it will be used with AI in the future

Resources Mentioned:
- Check out Podchaser
- Listen to The AI Fundamentalists Podcast
- Check out Monitaur

Guest Info:
- Follow Andrew on LinkedIn
- Follow Sid on LinkedIn
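Of the four techniques listed above, the Gaussian copula is the least self-explanatory, so here is a minimal sketch of the idea: learn a real dataset's dependence structure in "normal" space, sample from it, and map the samples back through each empirical marginal. The data and variable names are invented for illustration; this is not code from the episode.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Toy "real" dataset: two correlated features (e.g., age and income).
age = rng.normal(45, 12, 1000)
income = age * 1200 + rng.normal(0, 8000, 1000)
real = np.column_stack([age, income])
n, d = real.shape

# 1. Fit the copula: map each marginal to uniform via its empirical CDF,
#    then to standard normal, and estimate the correlation structure.
ranks = stats.rankdata(real, axis=0) / (n + 1)   # empirical CDF values in (0, 1)
z = stats.norm.ppf(ranks)                        # standard normal space
corr = np.corrcoef(z, rowvar=False)              # the dependence structure

# 2. Sample new points that share that dependence structure...
z_new = rng.multivariate_normal(np.zeros(d), corr, size=1000)
u_new = stats.norm.cdf(z_new)                    # back to uniforms

# 3. ...and push the uniforms back through each empirical marginal.
synthetic = np.column_stack(
    [np.quantile(real[:, j], u_new[:, j]) for j in range(d)]
)
# `synthetic` now mimics both the marginals and the correlation of `real`
# without replaying any original record.
```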
Sep 19, 2023 • 55min

S2E28: "BigTech Privacy; Responsible AI; and Bias Bounties at DEF CON" with Jutta Williams (Reddit)

This week, I welcome Jutta Williams, Head of Privacy & Assurance at Reddit, Co-founder of Humane Intelligence and BiasBounty.ai, Privacy & Responsible AI Evangelist, and Startup Board Advisor. With a long history of accomplishments in privacy engineering, Jutta has a unique perspective on the growing field. In our conversation, we discuss her transition from security engineering to privacy engineering; how privacy cultures differ across the social media companies where she's worked: Google, Facebook, Twitter, and now Reddit; the overlap of privacy engineering & responsible AI; how her nonprofit, Humane Intelligence, supports AI model owners; her experience launching the largest Generative AI red teaming challenge ever at DEF CON; and how a curious, knowledge-enhancing approach to privacy will create engagement and allow for fun.

Topics Covered:
- How Jutta’s unique transition from security engineering landed her in the privacy engineering space
- A comparison of privacy cultures across Google, Facebook, Twitter (now 'X'), and Reddit, based on her privacy engineering experiences there
- Two open Privacy Engineering roles at Reddit, and Jutta's advice for those wanting to transition from security engineering to privacy engineering
- Whether Privacy Pros will be responsible for owning new regulatory obligations under the EU's Digital Services Act (DSA) & Digital Markets Act (DMA), and the role of the Privacy Engineer when overlapping with Responsible AI issues
- Humane Intelligence, Jutta's 'side quest,' which she co-leads with Dr. Rumman Chowdhury and which supports AI model owners seeking 'Product Readiness Reviews' at scale
- When, during the product development life cycle, companies should perform 'AI Readiness Reviews'
- How to de-bias at scale, or whether attempting to do so is 'chasing windmills'
- Who should be hunting for biases in an AI Bias Bounty challenge
- DEF CON 31 AI Village's 'Generative AI Red Teaming Challenge,' a bias bounty that Jutta co-designed; lessons learned; and what Jutta & team have planned for DEF CON 32 next year
- Why it's so important for people to 'love their side quests'

Resources Mentioned:
- DEF CON Generative Red Team Challenge
- Humane Intelligence
- Bias Buccaneers Challenge

Guest Info:
- Connect with Jutta on LinkedIn
Sep 12, 2023 • 44min

S2E27: "Automated Privacy Decisions: Usability vs. Lawfulness" with Simone Fischer-Hübner & Victor Morel

Today, I welcome Victor Morel, PhD, and Simone Fischer-Hübner, PhD, to discuss their recent paper, "Automating Privacy Decisions – where to draw the line?" and their proposed classification scheme. We dive into the complexity of automating privacy decisions and emphasize the importance of maintaining both compliance and usability (e.g., via user control and informed consent). Simone is a Professor of Computer Science at Karlstad University with over 30 years of privacy & security research experience. Victor is a post-doc researcher at Chalmers University's Security & Privacy Lab, focusing on privacy, data protection, and technology ethics.

Together, they share their privacy decision-making classification scheme and research across two dimensions: (1) the type of privacy decision: privacy permissions, privacy preference settings, consent to processing, or rejection of processing; and (2) the level of decision automation: manual, semi-automated, or fully-automated. Each type of privacy decision plays a critical role in users' ability to control the disclosure and processing of their personal data. They emphasize the significance of tailored recommendations to help users make informed decisions and discuss the potential of on-the-fly privacy decisions. We wrap up with organizations' approaches to achieving usable and transparent privacy across various technologies, including web, mobile, and IoT.

Topics Covered:
- Why Simone & Victor focused their research on automating privacy decisions
- How GDPR & ePrivacy have shaped requirements for privacy automation tools
- The types of privacy decisions: privacy permissions, privacy preference settings, consent to processing, & rejection of processing (the scheme is sketched as code after this entry)
- The levels of automation for each privacy decision type (manual, semi-automated, & fully-automated), and the pros & cons of automating each decision type
- Preferences & concerns regarding IoT Trigger Action Platforms
- Why the only privacy decisions you should fully automate are rejections of processing, i.e., revoking consent or opting out
- Best practices for achieving informed control
- Automation challenges across web, mobile, & IoT
- Mozilla's automated cookie banner management & why it's problematic (i.e., unlawful)

Resources Mentioned:
- "Automating Privacy Decisions – where to draw the line?"
- CyberSecIT at Chalmers University of Technology
- "Tapping into Privacy: A Study of User Preferences and Concerns on Trigger-Action Platforms"
- Consent O Matic browser extension
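The two-dimensional scheme lends itself to a small data model. Here is a sketch encoding the dimensions as enums; the dimension labels come from the episode, while the `advisable` pairing logic is a simplified reading of the guests' point that only rejections of processing should be fully automated.

```python
from enum import Enum
from itertools import product

class DecisionType(Enum):
    PRIVACY_PERMISSION = "privacy permissions"
    PREFERENCE_SETTING = "privacy preference settings"
    CONSENT = "consent to processing"
    REJECTION = "rejection of processing"

class Automation(Enum):
    MANUAL = "manual"
    SEMI_AUTOMATED = "semi-automated"
    FULLY_AUTOMATED = "fully-automated"

def advisable(decision: DecisionType, level: Automation) -> bool:
    """Full automation only for rejections (revoking consent / opting out);
    consent itself must stay informed and user-controlled."""
    if level is Automation.FULLY_AUTOMATED:
        return decision is DecisionType.REJECTION
    return True

# Print the full classification grid:
for d, a in product(DecisionType, Automation):
    print(f"{d.value:32} {a.value:16} {'ok' if advisable(d, a) else 'avoid'}")
```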
Sep 5, 2023 • 52min

S2E26: "Building Ethical Machines" with Reid Blackman, PhD (Virtue Consultants)

This week, I welcome philosopher, author, & AI ethics expert Reid Blackman, PhD, to discuss Ethical AI. Reid authored the book "Ethical Machines" and is the CEO & Founder of Virtue Consultants, a digital ethical risk consultancy. His extensive background in philosophy & ethics, coupled with his engagement with orgs like AWS, U.S. Bank, the FBI, & NASA, offers a unique perspective on the challenges & misconceptions surrounding AI ethics.

In our conversation, we discuss 'passive privacy' & 'active privacy' and the need for individuals to exercise control over their data. Reid explains how the quest for training data for ML/AI can lead to privacy violations, particularly at BigTech companies. We touch on many concepts in the AI space, including automated decision making vs. keeping "humans in the loop," and combating AI ethics fatigue, along with advice for technical staff involved in AI product development. Reid stresses the importance of protecting privacy, educating users, & deciding whether to utilize external APIs or on-prem servers. We end by highlighting his HBR article, "Generative AI-xiety," and discuss the 4 primary areas of ethical concern for LLMs: the hallucination problem; the deliberation problem; the sleazy salesperson problem; & the problem of shared responsibility.

Topics Covered:
- What motivated Reid to write his book, "Ethical Machines"
- The key differences between 'active privacy' & 'passive privacy'
- Why engineering incentives to collect more data to train AI models, especially in BigTech, pose challenges to data minimization
- The importance of aligning privacy agendas with business priorities
- Why what companies infer about people can be a privacy violation; what engineers should know about 'input privacy' when training AI models; and how that affects the output of inferred data
- Automated decision making: when it's necessary to have a 'human in the loop'
- Approaches for mitigating 'AI ethics fatigue'
- The need to back up a company's stated 'values' with actions, and why there should always be 3 - 7 guardrails put in place for each stated value
- The differences between 'Responsible AI' & 'Ethical AI,' and why companies seem reluctant to talk about ethics
- Reid's article, "Generative AI-xiety," & the 4 main risks related to generative AI
- Reid's advice for technical staff building products & services that leverage LLMs

Resources Mentioned:
- Read the book, "Ethical Machines"
- Reid's podcast, Ethical Machines

Guest Info:
- Follow Reid on LinkedIn
Aug 29, 2023 • 50min

S2E25: "Anonymization & Deletion at Scale" with Engin Bozdag (Uber) & Stefano Bennati (HERE)

This week, we're chatting with Engin Bozdag, Senior Staff Privacy Architect at Uber, and Stefano Bennati, Privacy Engineer at HERE Technologies. We explore their recent IWPE '23 talk, "Can Location Data Truly Be Anonymized: a risk-based approach to location data anonymization," and discuss the technical & business challenges of achieving anonymization (a toy sketch of the core trade-off follows this entry). We also discuss the role of Privacy Engineers, how to choose a career path, the importance of embedding privacy into product development & DevPrivOps, collaborating with cross-functional teams, & staying up to date with emerging trends.

Topics Covered:
- Common roadblocks privacy engineers face with anonymization techniques & how to overcome them
- How to get budgets for anonymization tools; challenges with scaling & regulatory requirements & how to overcome them
- What it means to be a 'Privacy Engineer' today, good career paths, and necessary skill sets
- How third-party data deletion tools can be integrated into a company's distributed architecture
- What Privacy Engineers should understand about vendor privacy requirements for LLMs before bringing them into their orgs
- The need to monitor changes in data or source code via code scanning; how HERE Technologies uses Privado to monitor the compliance of its products & data lineage; and how Privado detects new assets added to your inventory & any new API endpoints
- Advice on how to deal with conflicts between engineering, legal & operations teams, and how to get privacy issues fixed within an org
- Strategies for addressing privacy issues within orgs, including collaboration, transparency, and continuous refinement

Resources Mentioned:
- IAPP Defining Privacy Engineering Infographic
- EU AI Act
- Ethics Guidelines for Trustworthy AI
- Privacy Engineering Superheroes
- FTC Investigates OpenAI over Data Leak and ChatGPT's Inaccuracy

Guest Info:
- Follow Engin
- Follow Stefano
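To ground the "risk-based approach" in something runnable, here is a toy coarsening loop: reduce coordinate precision until every grid cell contains at least k points, trading spatial utility against re-identification risk. This is a deliberate oversimplification for illustration only, not the method from Engin and Stefano's talk.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
# Toy GPS points clustered around a city center (lat, lon).
raw = rng.normal([52.37, 4.89], 0.01, size=(500, 2))

def coarsen(points, decimals):
    """Snap coordinates to a grid; fewer decimals = coarser cells, lower risk."""
    return [tuple(p) for p in np.round(points, decimals)]

def min_cell_count(cells):
    """Smallest number of points sharing a cell - a crude re-identification proxy."""
    return min(Counter(cells).values())

# Coarsen until every cell holds at least k points (a k-anonymity-style
# threshold), accepting the loss of spatial precision that this implies.
k = 10
for decimals in (4, 3, 2, 1):
    cells = coarsen(raw, decimals)
    if min_cell_count(cells) >= k:
        print(f"{decimals} decimal places satisfies k={k}")
        break
```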
