The first machine learning project I ever worked on, before starting my PhD, was on this very topic, which we call model stealing attacks. So if you have a model that's sitting somewhere behind a cloud API, and you want this model to stay private, maybe because it uses proprietary information or because you want to charge people for using it, then the fact that you let people query this model is inherently leaking information. At some point, with enough queries, you can probably reconstruct a local model that's quite similar to the one that is supposed to be hidden. Whether this is feasible in practice depends a lot on the size of the model; for reasonably small models, it's usually not particularly expensive.
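To make the idea concrete, here is a minimal sketch of a model stealing attack, not taken from any particular paper: the attacker sends queries to a black-box endpoint, records the responses, and fits a local surrogate on those query/response pairs. The query_api function below is a hypothetical stand-in for the real cloud API so the example runs end to end; the surrogate here is a simple scikit-learn classifier, purely for illustration.

# Minimal model-stealing sketch (illustrative only).
# `query_api` is a hypothetical stand-in for the hidden cloud model.

import numpy as np
from sklearn.linear_model import LogisticRegression

def query_api(x):
    # Pretend this is the remote API; the "hidden" model is a simple
    # linear rule so the example is self-contained and runnable.
    secret_w = np.array([1.5, -2.0, 0.5])
    return (x @ secret_w > 0).astype(int)

# 1. The attacker picks (or samples) query inputs.
queries = np.random.randn(5000, 3)

# 2. Each response leaks a little information about the hidden model.
labels = query_api(queries)

# 3. Train a local surrogate on the query/response pairs.
surrogate = LogisticRegression().fit(queries, labels)

# 4. Measure how often the surrogate agrees with the hidden model on fresh inputs.
test = np.random.randn(1000, 3)
agreement = (surrogate.predict(test) == query_api(test)).mean()
print(f"surrogate agrees with hidden model on {agreement:.1%} of test inputs")

With a model this small, a few thousand queries are enough for the surrogate to agree with the hidden model almost everywhere, which is the point being made above: the cost of extraction scales with the size of the target model.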
