This is a guest post written by Inference Labs. You can see their version of the post here.
From Web3 and Web2 platforms to traditional brick-and-mortar businesses, every domain we navigate is shaped by rigorously engineered incentive systems that structure trust, value, and participation. Now player 2 has entered the chat — AI Agents. As they join, how do we ensure open and fair participation for all? From “Truth Terminal” to emerging AI Finance (AiFi) systems, the core solution lies in implementing robust verification primitives.
AI is a magical black box that performs a bunch of actions to produce an output. We can’t trust what a developer says the black box does inside without it being completely open source (including weights).
This is a concept for a system where the actions performed can be proved to people who don't have visibility inside the box, so they can trust the box is doing what it says it's doing.
Think of an AI enemy in a game that can prove it isn't cheating by providing proof of the actions it took. In theory, anyway.
Zero-knowledge proofs make a lot of sense for cryptography, but at a more abstract level like this, the scheme still relies on a lot of trust that the implementation generates proofs for all actions.
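To make that concrete, here's a minimal sketch in Python of the protocol shape being described. Only the hash commitment is real, runnable code; `prove_inference` and `verify` are hypothetical stand-ins for a zkML prover/verifier (libraries like ezkl exist, but their actual APIs differ):

```python
import hashlib

# --- Prover side: has the secret model ---

def commit_to_model(weights: bytes) -> str:
    """Publish one hash of the weights up front; the weights stay secret."""
    return hashlib.sha256(weights).hexdigest()

def prove_inference(weights: bytes, x: bytes, y: bytes) -> bytes:
    """HYPOTHETICAL stand-in for a zkML prover: would return a proof
    that y = model(x) for the committed weights, revealing nothing else."""
    raise NotImplementedError

# --- Verifier side: sees only (commitment, x, y, proof) ---

def verify(commitment: str, x: bytes, y: bytes, proof: bytes) -> bool:
    """HYPOTHETICAL stand-in for a zkML verifier."""
    raise NotImplementedError

# The trust gap from the comment above: nothing here forces the
# operator to route *every* action through prove_inference.
```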
Whenever I see Web3, I personally lose any faith in whatever is being presented or proposed. To me, blockchain is an impressive solution to no real problem (except perhaps border control / customs).
ZK in this context allows someone to thoroughly test a model and publish the results with proof that the same model was used.
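Concretely, the "same model" binding comes from a commitment: hash the weights once, then attach that hash to every published result. Here's a runnable sketch of just the commitment side (the weight bytes are placeholders, and the zk proof that each output actually came from the committed model is the part not shown):

```python
import hashlib
import json

def model_commitment(weights: bytes) -> str:
    # One hash published up front; the weights themselves stay private.
    return hashlib.sha256(weights).hexdigest()

weights = b"placeholder-for-secret-model-weights"
commitment = model_commitment(weights)

# Each published test result is bound to the same commitment.
results = [
    {"model": commitment, "input": "test_case_1", "output": 0.91},
    {"model": commitment, "input": "test_case_2", "output": 0.13},
]

# Anyone can check that every result references one and the same model;
# a zkML proof per result (not shown) would guarantee each output was
# really computed by that model.
assert len({r["model"] for r in results}) == 1
print(json.dumps(results, indent=2))
```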
Blockchain is actually a great fit for zk-ml, for two reasons:
1. It's a public, immutable database where people can commit to the hash of some model they want to keep hidden.
2. It allows someone with a "model" (which doesn't have to be a neural net; it could be some statistical computation) and a verifier to do work for others for a fee. Say I'm a real estate agent with a huge dataset of property values for a given area, and I want other people to run some crazy computation on it to predict which houses will likely sell first in the next 30 days. I could post this challenge online with the data; other people could run models against it and post their results (but not how they got them) on chain. After 30 days I could publish the updated data, reward the best performer, and potentially "buy" their model. You could do this with a centralized service, but they would likely take a fee, keep things proprietary, and try to make shady back-room deals. This removes the middleman. (A sketch of this flow follows below.)
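Here's a minimal sketch of that challenge flow as a plain commit-reveal with hashes. Everything here (the names, the toy scoring rule) is made up for illustration; a real version would post the commitments on chain and add zk proofs so the winning model itself never has to be revealed:

```python
import hashlib
import secrets

def commit(predictions: str, salt: str) -> str:
    """Participants post only this hash before the deadline."""
    return hashlib.sha256(f"{salt}:{predictions}".encode()).hexdigest()

# Before the deadline: a participant commits to their predictions.
salt = secrets.token_hex(16)
predictions = "house_12,house_7,house_31"        # made-up entry
posted = commit(predictions, salt)               # this goes on chain

# 30 days later: the agent publishes the ground truth.
ground_truth = {"house_12", "house_7", "house_9"}

# Reveal: the participant discloses predictions + salt; the hash check
# proves they didn't change their answer after seeing the truth.
assert commit(predictions, salt) == posted
score = len(set(predictions.split(",")) & ground_truth)
print(f"score: {score} of {len(ground_truth)}")
```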
The way AI is trained today creates a black-box solution; the author says only the developers of the model know what goes on inside the black box.
This is a major pain point in AI, where we are trying to understand it so we can make it better and more reliable. The author mentions that unless AI companies open-source their work, it's impossible for everyone else to 'debug' the circuit.
Zero-knowledge proofs are how they are trying to combat this: using mathematical algorithms to verify the output of an AI model in real time, without having to know the underlying intellectual property.
This could be used to train AI further and drastically increase its reliability, so it could be used to make more important decisions and adhere much more closely to the strategies for which it is deployed.
Hey can someone dumb down the dumbed down explanation for me please?
Thanks for the ‘for dummies’ explanation.