• AtHeartEngineerOP

    The model doing the inference is committed to beforehand (its hash is published), so you can’t lie about which model produced the inference. That is how ezkl, the underlying library, works.
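
    As a rough sketch of what that commitment looks like, you can think of it as hashing the serialized model and publishing the digest up front (plain SHA-256 and the file name here are just stand-ins; ezkl’s actual commitment is built into its proving system):

        import hashlib

        # Hash the serialized model (e.g. an ONNX export) and publish the digest
        # before anyone runs inference. "model.onnx" is a hypothetical file name,
        # and plain SHA-256 stands in for ezkl's real commitment scheme.
        with open("model.onnx", "rb") as f:
            model_commitment = hashlib.sha256(f.read()).hexdigest()

        print(model_commitment)  # this is the value that gets committed to publicly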

    I know a lot of people in this space, and there are definitely scammers across the general “crypto” world, but in the actual cryptography community most people are driven by curiosity or ideology.

    • @[email protected]

      I appreciate the reply! And I’m sure I’m missing something, but… Why can’t you just lie about the model you used?

      • AtHeartEngineerOP

        Ahh, ya, so this is a deep rabbit hole, but I’ll try to explain it as best I can.

        A zero-knowledge proof is a cryptographic way of proving that some computation was done correctly, while letting you “hide” some of the inputs if you want.

        In the context of the ezkl library, this lets someone train a model and publicly commit to it by posting a hash of the model somewhere. Someone else can then run inference on that model, and what comes out is the hash of the model, the output of the inference, and a cryptographic “proof”. Anyone can verify that the computation was indeed done with that model and that the result is correct, while the person running the inference can keep the input hidden.
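
        Very roughly, the data flow looks like the toy sketch below. This is not ezkl’s real API, and the “proof” here is just another hash (so it hides nothing cryptographically); it only shows which values stay private on the prover’s side and which values the verifier gets to see:

            import hashlib, json

            def commit(model_bytes: bytes) -> str:
                # Public commitment to the model, published before any inference runs.
                return hashlib.sha256(model_bytes).hexdigest()

            # --- prover side: knows the model and the (private) input ---
            def run_inference(model_bytes: bytes, private_input: list):
                # Toy "model": just sums the inputs. In ezkl this would be an ONNX
                # graph compiled into a circuit.
                output = sum(private_input)
                # Stand-in "proof" binding model, input, and output together. A real
                # zk proof convinces the verifier of this binding WITHOUT the input.
                proof = hashlib.sha256(
                    model_bytes + json.dumps([private_input, output]).encode()
                ).hexdigest()
                return commit(model_bytes), output, proof

            # --- verifier side: only ever sees public values ---
            def verify(model_hash: str, output, proof: str, published_hash: str) -> bool:
                # Checks the inference was done with the committed model. (In this toy
                # version we can only check the hash; a real verifier also checks the
                # proof that the output is correct, still without seeing the input.)
                return model_hash == published_hash

            model = b"fake serialized model bytes"
            published = commit(model)                      # committed up front
            h, out, prf = run_inference(model, [1, 2, 3])  # input stays with the prover
            print(verify(h, out, prf, published))          # True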

        Or let’s say you have a competition for whoever can train the best classifier for some specific task. I could run my model on a public test set while “hiding” the model itself; the zk computation would still reveal the hash of the model. If I won the competition, I could reveal my model at the end, and anyone could check that the model I revealed is in fact the same model that was run and beat everyone else.
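
        The end-of-competition check is then just a hash comparison, something like the sketch below (the file names are made up; the committed hash is whatever digest showed up in the winning proofs):

            import hashlib

            # Hash that appeared in the winning proofs during the competition
            # (stored wherever the competition published it; hypothetical file).
            with open("committed_hash.txt") as f:
                committed_hash = f.read().strip()

            # The winner now reveals the model file; anyone can re-hash it themselves.
            with open("winning_model.onnx", "rb") as f:
                revealed_hash = hashlib.sha256(f.read()).hexdigest()

            if revealed_hash == committed_hash:
                print("Revealed model matches the one the winning proofs committed to.")
            else:
                print("Mismatch: this is not the model that was actually run.")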