In some earlier papers, if I remember correctly the Boneh paper and others, the requirement was that the image had to be public data in order to generate the proof. And so a lot of what you did was getting around that. One problem, both for attested image edits and also for machine learning, is that you might want to hide some parts of the input. So in our work, we also introduced this for the ZKML space: you can compute a commitment, in our case a hash of the weights, and reveal that. Because the commitment is binding, it ties the API provider to the weights they hashed, and then you can be assured that they're running the correct model.
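To make the idea concrete, here is a minimal sketch of a hash-based binding commitment to model weights. This is illustrative, not the speakers' actual implementation: the weight layout (a dict of numpy arrays), the serialization order, and the function name are all assumptions, and SHA-256 stands in for whatever hash their system uses.

```python
import hashlib
import numpy as np

def commit_to_weights(weights: dict) -> str:
    """Compute a binding commitment (here, a SHA-256 hash) over model weights.

    `weights` maps parameter names to numpy arrays. Serializing in sorted
    key order makes the hash deterministic across runs.
    """
    h = hashlib.sha256()
    for name in sorted(weights):
        h.update(name.encode("utf-8"))
        h.update(np.ascontiguousarray(weights[name]).tobytes())
    return h.hexdigest()

# The provider publishes the commitment once...
weights = {"layer0.w": np.ones((4, 4), dtype=np.float32),
           "layer0.b": np.zeros(4, dtype=np.float32)}
published = commit_to_weights(weights)

# ...and clients later check against it: any change to the weights changes
# the hash, so the commitment pins down which model is being served.
assert commit_to_weights(weights) == published
```

In practice, ZKML systems tend to use a SNARK-friendly hash such as Poseidon so the commitment can be opened and checked inside the proof itself; SHA-256 here just illustrates the binding property.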
