How to Use Data With AI and LLM Without Compromising Sensitive Data
I'd say you can do things in three ways. For example, when you're training a model, or even fine-tuning a foundation model for a use case, you want to redact the data and put some standards in place. Or you can do one-way pseudo-anonymization, which uses tokens - not LLM tokens we're talking about, but stand-in tokens; these are standards for Amruta. And then the third piece is reversible Pseudo-Anonymized Data (ROCAD), right? You pseudo-anonymize the whole thing, put stand-in tokens in, send it, and then you re-identify it. It's
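The third approach described here, reversible pseudo-anonymization, can be sketched roughly as follows. This is a minimal illustration, not the speaker's actual implementation: the email-only regex, the token format, and the call_llm placeholder are all assumptions made for demonstration.

```python
import re
import uuid

# Minimal sketch of reversible pseudo-anonymization (hypothetical example):
# sensitive values are swapped for stand-in tokens before the text is sent
# to an LLM, and the mapping is kept locally so the model's response can be
# re-identified afterwards.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace each email address with an opaque token; return text and mapping."""
    mapping: dict[str, str] = {}

    def _swap(match: re.Match) -> str:
        token = f"<PII_{uuid.uuid4().hex[:8]}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL_RE.sub(_swap, text), mapping

def reidentify(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

# Usage: pseudonymize, call the model, then re-identify the answer locally.
prompt, mapping = pseudonymize("Please draft a reply to jane.doe@example.com")
# response = call_llm(prompt)  # hypothetical LLM call; the provider never sees the raw email
response = f"Sure, here is a reply addressed to {prompt.split()[-1]}."
print(reidentify(response, mapping))
```

Real systems would detect many more entity types (names, account numbers, addresses) and store the mapping securely, but the round trip of tokenize, send, re-identify is the core idea.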