
S2E10: Leveraging Synthetic Data and Privacy Guarantees with Lipika Ramaswamy (Gretel.ai)

The Shifting Privacy Left Podcast

CHAPTER

Using Tokenization and Anonymization in Machine Learning?

If you train a machine learning model on your personal data, it can generate output that is almost identical to your personal data. So why use synthetic data instead of other techniques like tokenization, anonymization, aggregation, and others? Because those techniques don't really leave our data with a nice solution to its privacy issues, and so that's one way that it stays open to vulnerability. There are tons of really famous studies on this.
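
The "famous studies" the speaker alludes to are re-identification results such as Latanya Sweeney's demonstration that ZIP code, birth date, and sex alone can single out most individuals. Below is a minimal sketch, using entirely hypothetical data, of why tokenizing direct identifiers is not enough: quasi-identifiers left in the release can be joined against a public auxiliary dataset to recover who is who.

```python
import pandas as pd

# "Anonymized" release: names replaced with opaque tokens,
# but quasi-identifiers (zip, birth_date, sex) left intact.
released = pd.DataFrame({
    "token": ["u_91f3", "u_7aa0", "u_c44d"],
    "zip": ["02139", "02139", "94107"],
    "birth_date": ["1985-03-02", "1990-11-17", "1985-03-02"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "hypertension"],
})

# Public auxiliary data (e.g., a voter roll) with real names.
auxiliary = pd.DataFrame({
    "name": ["Alice Smith", "Bea Chen"],
    "zip": ["02139", "94107"],
    "birth_date": ["1985-03-02", "1985-03-02"],
    "sex": ["F", "F"],
})

# Joining on the quasi-identifiers links tokens back to names,
# re-attaching sensitive attributes to real people.
reidentified = released.merge(auxiliary, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "token", "diagnosis"]])
```

This is the vulnerability the episode contrasts with synthetic data generated under formal privacy guarantees: instead of masking fields in real records, a generative model with a guarantee like differential privacy emits records that carry the statistical patterns of the data without any row mapping back to a real person.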
