S2E10: Leveraging Synthetic Data and Privacy Guarantees with Lipika Ramaswamy (Gretel.ai)

The Shifting Privacy Left Podcast

Using Tokenization and Anonymization in Machine Learning?

If you train a machine learning model on your personal data, it can generate output that is nearly identical to that personal data. So why use synthetic data instead of other techniques like tokenization, anonymization, or aggregation? Because our data doesn't really live in isolation, those techniques aren't a nice solution to privacy issues, and that's one way they're open to vulnerability. There are tons of really famous studies on this.
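The "really famous studies" alluded to here are re-identification results, where an "anonymized" dataset is linked back to named individuals through shared quasi-identifiers. A minimal sketch of that kind of linkage attack, using entirely hypothetical data (not anything from the episode):

```python
import pandas as pd

# Hypothetical "anonymized" records: direct identifiers removed, but
# quasi-identifiers (ZIP code, birth date, sex) kept for utility.
anonymized = pd.DataFrame({
    "zip": ["02138", "02139", "02139"],
    "birth_date": ["1965-07-31", "1971-02-14", "1980-11-02"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["hypertension", "diabetes", "asthma"],
})

# Hypothetical public dataset (e.g., a voter roll) containing the same
# quasi-identifiers alongside names.
public = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Lee"],
    "zip": ["02138", "02139", "02139"],
    "birth_date": ["1965-07-31", "1971-02-14", "1980-11-02"],
    "sex": ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses:
# the "anonymized" table was never safe on its own.
reidentified = anonymized.merge(public, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Because each (zip, birth_date, sex) combination is unique here, every record re-identifies; this is the vulnerability that synthetic data with formal privacy guarantees aims to close.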

