Brevis and Allora: Verifiable Inference for Decentralized AI

TL;DR: Brevis is partnering with Allora to bring zero-knowledge verifiability to decentralized AI inference. The integration uses Brevis as a ZK oracle for inference and prediction settlement on-chain, and as a ZK host for Allora’s reputer loss function, the core mechanism that adjusts weights across worker models. Every weight adjustment becomes provable, every input becomes traceable, and the entire optimization loop runs without requiring trust in any single reputer.

Why this matters

Allora is a self-improving decentralized AI network. Workers submit inferences, forecasters predict the accuracy of those inferences, and reputers evaluate the results to feed losses back into a weighting function that determines which worker contributions matter most in the final network inference. The architecture rewards accuracy over time, and each epoch sharpens the weights.
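To make the weighting concrete, here is a minimal sketch of how a network inference could be formed from worker submissions. The function name and the simple weighted average are illustrative assumptions, not Allora’s actual aggregation scheme.

```python
# Hypothetical sketch: combining worker inferences into a network inference.
# The weighted-average scheme is an illustrative assumption, not Allora's spec.

def network_inference(inferences: dict[str, float],
                      weights: dict[str, float]) -> float:
    """Weighted average of worker inferences; higher-weight workers count more."""
    total = sum(weights[w] for w in inferences)
    return sum(inferences[w] * weights[w] for w in inferences) / total

# Example: worker "a" has earned more weight than "b" over past epochs,
# so the network inference lands closer to "a"'s submission.
print(network_inference({"a": 101.0, "b": 99.0}, {"a": 3.0, "b": 1.0}))  # 100.5
```

The point of the epoch loop is that these weights are not static: each round of reputer-computed losses nudges them toward the historically more accurate workers.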

That design depends on the integrity of the reputer layer. The losses reputers compute and the weights those losses produce decide which models speak loudest. If anyone can quietly manipulate that loop, the self-improvement guarantee breaks.

The standard fix is to grow the reputer set, leaning on replication and quorum to make manipulation expensive. That works, but it costs network resources, slows iteration, and never quite eliminates collusion risk. A more efficient fix is to make the reputer’s work cryptographically provable, removing the need to trust it at all. That is what this integration delivers.

What’s being built

The Allora x Brevis integration has two pillars.

ZK oracle for inference and prediction settlement. Allora’s predictions need ground-truth data to resolve, like the actual BTC price at a given time, or the realized value of any other predicted variable. Brevis acts as the ZK oracle that verifies that settlement data, pulling it from on-chain sources through the ZK Data Coprocessor or off-chain sources through zkTLS. Each settled value carries a ZK proof that the underlying data came from an authentic, untampered source, giving smart contracts that consume Allora outputs (prediction markets, autonomous agents, on-chain trading systems) a cryptographic guarantee on the resolution side. The same verified data feeds into the second pillar, where the loss function uses it to score model predictions against reality and adjust weights accordingly.
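As a concrete example of resolution-side scoring, the sketch below compares a prediction against a verified settlement value. Squared error is an assumed placeholder metric here; Allora’s actual loss definition may differ.

```python
# Illustrative sketch of resolution-side scoring: a worker's prediction is
# compared against the ZK-verified settlement value. Squared error is an
# assumption for illustration, not Allora's documented loss metric.

def settlement_loss(prediction: float, settled_value: float) -> float:
    """Score one prediction against the oracle-verified ground truth."""
    return (prediction - settled_value) ** 2

# BTC price example: predicted 64,200, verified settlement 64,000.
print(settlement_loss(64_200.0, 64_000.0))  # 40000.0
```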

ZK host for the reputer loss function. This is the more novel half of the integration. Brevis runs a ZK version of Allora’s loss-function logic inside Pico zkVM, computing weight adjustments and producing a recursive proof that attests to both the input data and the execution itself. Allora’s contracts verify the proof and apply the new weights. No reputer needs to be trusted, because the math is checkable.
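A minimal sketch of the kind of loss-to-weight mapping such a loss function performs: lower loss yields higher weight. The exponential scheme and temperature parameter are illustrative assumptions, not Allora’s published formula.

```python
# Illustrative loss-to-weight mapping of the kind a reputer loss function
# might compute inside a zkVM. The exponential (softmax-style) scheme is an
# assumption for illustration, not Allora's actual formula.
import math

def weights_from_losses(losses: dict[str, float],
                        temperature: float = 1.0) -> dict[str, float]:
    """Map per-model losses to normalized weights: lower loss -> higher weight."""
    scores = {m: math.exp(-l / temperature) for m, l in losses.items()}
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

w = weights_from_losses({"model_a": 0.1, "model_b": 0.5})
# model_a receives the larger share because its loss is lower.
```

In the integration, it is exactly this computation, whatever its real form, that runs inside Pico zkVM and comes with a proof of correct execution.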

How the data flow works

The end-to-end pipeline runs in five steps:

  1. For each prediction window, the reputer gathers each worker model’s prediction together with the verified settlement value from the ZK oracle, pulled from on-chain sources through the ZK Data Coprocessor or off-chain sources through zkTLS. The settlement data carries a ZK proof of its authenticity.
  2. The reputer computes per-model inference losses by comparing each worker’s prediction against the verified settlement. Those losses become the inputs to the ZK version of Allora’s loss function hosted on Pico zkVM, which aggregates them into weight adjustments that optimize the cluster’s inference performance.
  3. Brevis generates a single recursive proof covering steps 1 and 2: the settlement data’s authenticity, the correctness of the per-model loss computations, and the correct execution of the loss function on those losses.
  4. The weight result and the proof are submitted on-chain.
  5. Allora’s smart contracts verify the proof and apply the weights to reassign worker model influence in the next epoch.
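The steps above can be sketched as a verify-then-apply flow. Everything here is illustrative: the proof is modeled as an opaque blob and the verifier is a stub standing in for the real recursive proof verification in Brevis’s stack.

```python
# Illustrative verify-then-apply flow for steps 3-5. The proof is an opaque
# blob and verify() is a stub; real recursive proof generation and on-chain
# verification happen in Brevis's stack and Allora's contracts.
from dataclasses import dataclass

@dataclass
class WeightUpdate:
    weights: dict[str, float]   # output of the loss function (step 2)
    proof: bytes                # recursive proof over data + execution (step 3)

def verify(update: WeightUpdate) -> bool:
    # Stub for the on-chain verifier (step 5); a real contract would check
    # the recursive ZK proof against a verification key.
    return len(update.proof) > 0

def apply_weights(current: dict[str, float],
                  update: WeightUpdate) -> dict[str, float]:
    """Steps 4-5: submit, verify, and apply weights for the next epoch."""
    if not verify(update):
        raise ValueError("proof rejected; weights unchanged")
    return update.weights
```

The key property is that `apply_weights` never trusts the submitter: weights change only if the proof covering the whole pipeline checks out.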

The recursion is what makes this elegant. The proof generated at step 3 inherits the settlement data attestations from step 1 and the loss computations from step 2. One verification on-chain covers the entire optimization loop, from raw data to applied weights.

What the integration unlocks for Allora

Allora’s reputer set no longer needs to grow to defend against manipulation. ZK already does that work, more rigorously than any quorum of reputers could. The team can keep the reputer layer lean and focus engineering effort on the inference and forecasting logic that actually drives accuracy gains.

Builders integrating Allora’s outputs get a stronger guarantee on the parts of the network that affect their integration. The proof covers two things: the settlement data that resolves predictions, and the loss function that translates per-model accuracy into the weight adjustments shaping future cluster inference. That optimization loop is what this integration makes verifiable, and it has historically been the easier attack surface, one that validator-set consensus has had to defend until now. The guarantee matters most in adversarial settings: prediction markets settling on Allora outputs, autonomous agents acting on Allora forecasts, financial primitives consuming Allora-derived signals. Verifiable inference optimization turns Allora into a primitive that can sit underneath higher-stakes use cases without forcing them to inherit validator trust assumptions.

What this means for Brevis

This is one of the cleanest applications yet of Pico zkVM as a host for general-purpose computation. The loss function isn’t a historical data lookup or a simple aggregation; it is real statistical logic running on real data, and Pico proves the whole thing without forcing Allora to redesign its math for circuit-friendliness.

It also extends the verifiable computing thesis Brevis has been building across multiple fronts into a new domain: decentralized AI inference optimization. The underlying answer is the same one Brevis keeps reaching: verifiability is the property that lets these systems compose without dragging trust assumptions along with them.

What’s next

The partnership is in the build phase. Brevis and Allora are aligning on the loss function spec and the recursive proof structure now, with integration milestones to follow. We will share more on the implementation and the launch timeline as the work progresses.

In the meantime, builders interested in verifiable inference optimization, ZK-secured AI primitives, or applications that consume Allora outputs are welcome to reach out. The combination of decentralized AI inference and zero-knowledge verifiability opens design space that neither side could cover alone.

About Brevis

Brevis is a verifiable computing platform powered by zero-knowledge proofs, serving as the infinite compute layer for Web3. Applications can offload expensive computations off-chain while proving every result on-chain. The Brevis stack includes Pico zkVM for general-purpose computation, the ZK Data Coprocessor for trustless access to historical blockchain data, Pico Prism for real-time Ethereum block proving (99.8% coverage on 16 GPUs, hitting the Ethereum Foundation’s $100K hardware target), Vera for ZK-proven media authenticity, and ProverNet, the decentralized marketplace for ZK proof generation now running on mainnet. To date, Brevis has generated 340M+ proofs across 50+ protocols on 8+ blockchains.

Dive Deeper into Brevis:
Website | X | Discord | Pico zkVM | ZK Data Coprocessor | Incentra | ProverNet

Interested in building with Brevis? Reach out to us to explore ideas!