Part 8: ProverNet: Coordinating the ZK Ecosystem

By now we’ve painted a picture of a blossoming ZK ecosystem. More teams building proving technology every year, each exploring different architectures, proof systems, and optimization tradeoffs. Rollups pushing transaction throughput, bridges tackling cross-chain finality, coprocessors unlocking data access, and zkVMs making it all programmable. The supply side of verifiable computation has never been richer.

Meanwhile, more applications are realizing what this technology makes possible. DEXs offering personalized fees based on trading history, lending protocols distributing rewards with cryptographic fairness guarantees, and wallets letting users prove things about their activity without exposing it. Experiences that simply weren’t feasible before are now running in production.

We now have proving capacity expanding rapidly on one side, and an ever-growing pool of applications that could use it on the other. You’d think connecting them would be straightforward.

It isn’t.

Right now, applications that need proofs mostly get them through direct relationships, custom integrations with specific providers, and one-off arrangements. There’s no efficient way to match what applications need with what provers can deliver, no price discovery, no way for the best-fit provider to find the workloads they’re suited for.

What’s needed is an open market for proof generation. This final part explores how we’re building one, and why coordinating heterogeneous demand with an equally diverse proving landscape requires rethinking how such a market works.

Why a Market?

Before we get into the mechanics of how this works, let’s step back and ask a more fundamental question: why is a market the right approach here?

Applications could run their own proving infrastructure. Some do. But think back to what we covered in Part 5 about zkVMs. The whole point of that abstraction layer was to let developers write normal code without becoming cryptography experts. Asking those same developers to also operate GPU clusters, optimize proving pipelines, and manage hardware capacity defeats the purpose. Most applications want to focus on their product, not on running infrastructure.

Alternatively, a few large providers could try to handle everything. But think about what we’ve covered throughout this series. The workloads are wildly diverse. A DEX hook needs a proof in 2-3 seconds. A reward distribution processes hundreds of thousands of addresses over days. L1 block proving demands 99% coverage under 10 seconds on specialized GPU clusters. Privacy-preserving attestations have completely different cryptographic requirements.

No single provider, no matter how sophisticated, can optimally serve all of these. The hardware that excels at low-latency GPU proving underperforms on CPU-bound batch aggregation. The proof systems optimized for one type of computation leave performance on the table for others. Trying to be good at everything means being great at nothing.

So applications with specialized needs end up seeking out specialized providers, which leads right back to fragmentation: custom relationships, one-off integrations, no efficient way to discover who can actually serve your particular workload.

A market solves this by letting applications describe what they need while provers offer what they’re good at, with the matching happening automatically based on fit. Specialists can focus on what they do best and still find work, while applications get access to the right proving capacity efficiently.

But building a market for proof generation turns out to be harder than it sounds.

The Coordination Challenge

In a typical marketplace, goods are interchangeable. One kilowatt-hour of electricity is the same as another, one bushel of wheat is the same as another, and buyers care mostly about price and quantity. The market clears through simple supply and demand.

Proof generation doesn’t work like that.

Given the diversity of workloads we just described, a proof request is never just “generate a proof.” It’s “generate a proof of this specific type, meeting these security parameters, within this deadline, compatible with this verification environment, under this maximum proof size.” Each of those constraints matters. A marketplace that treats all proof requests as interchangeable will produce bad matches, pairing applications with provers who technically generate proofs but can’t actually meet the specific requirements.

The matching mechanism needs to understand these distinctions and route jobs to provers who can deliver what’s actually needed, not just whoever bids lowest.

Then there’s timing. Some proof requests are latency-critical, needing to clear in seconds, while others can wait hours or days. A DEX hook verifying eligibility before a swap executes can’t wait for an auction that takes hours to clear, which means the matching process itself needs to run fast enough to honor real-time constraints.

And there’s accountability. In a decentralized market, you can’t rely on reputation and relationships the way you can with direct integrations. If a prover accepts a job and fails to deliver, what recourse does the application have? The mechanism needs built-in guarantees, not just trust.

What a Proving Market Requires

So what would it take to build a market that actually works for proof generation?

Start with the basics. Applications submit proof requests specifying what they need: proof type, deadline, maximum fee, verification target, any other constraints. Provers report what they can offer: their capacity, their costs, what kinds of workloads they’re equipped to handle. The marketplace matches them based on fit, not just price.
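In code, the two sides of this order book might look something like the following. This is a minimal sketch with illustrative field names, not ProverNet's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ProofRequest:
    proof_type: str           # workload category label (illustrative)
    deadline_s: float         # seconds until the proof must be delivered
    max_fee: float            # the most the application is willing to pay
    verification_target: str  # chain/environment the proof must verify on

@dataclass
class ProverOffer:
    proof_types: set          # categories this prover is equipped to handle
    capacity: int             # jobs it can accept this round
    cost: float               # minimum fee it will accept per job

def is_viable(req: ProofRequest, offer: ProverOffer) -> bool:
    """Fit first, then price: a match is only viable if the prover
    supports the requested category, has spare capacity, and the
    economics clear."""
    return (req.proof_type in offer.proof_types
            and offer.capacity > 0
            and offer.cost <= req.max_fee)
```

The point of the `is_viable` check is that price alone never decides a match; an offer that fails on category or capacity is filtered out before cost is ever compared.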

But the matching mechanism needs specific properties to work reliably:

  • Truthful: the best strategy for both sides is to report their actual valuations and costs rather than gaming the system
  • Budget-balanced: fees collected from applications cover payments to provers without requiring external subsidies
  • Individually rational: no participant is forced into a losing trade
  • Computationally efficient: the matching algorithm runs fast enough to honor latency-sensitive deadlines
  • Heterogeneous by design: different proof types, hardware requirements, and deadline constraints are considered simultaneously

This is a non-trivial mechanism design problem, and standard auction formats weren’t built for goods this varied. But it’s solvable.

How the Brevis ProverNet Works

We’ve spent the past several years operating proving infrastructure across 30+ protocol integrations on 6 blockchains, generating more than 250 million proofs. That experience taught us that no single proving setup optimally serves the full range of what applications need. The glue-and-coprocessor architecture we covered in Part 6 was built precisely because we recognized this diversity, and we’ve been doing coordination internally all along, matching workloads to different proving capacity whenever our partners submit requests.

ProverNet, first detailed in a Nov. 17th, 2025 whitepaper and blog post, takes what we’ve learned and opens it up as a two-sided marketplace, with applications on one side, provers on the other, and an auction mechanism in the middle that handles the matching.

Each round works like this:

  1. Applications submit proof requests to Brevis Chain, a dedicated rollup built specifically for marketplace coordination. Each request specifies the proof type, deadline, maximum fee, and any other requirements.
  2. Provers report their available capacity and costs simultaneously, indicating what kinds of workloads they can handle and at what price.
  3. The bidding window closes and a matching algorithm computes the optimal allocation: which jobs go to which provers at what prices, considering proof types, prover capabilities, deadline constraints, and cost structures simultaneously.
  4. Provers generate and deliver the proofs, which get verified and stored. Applications fetch their completed proofs.
  5. Payment flows to the provers who delivered, coordinated through the BREV token, which serves as the payment medium for proving fees, the staking collateral that provers lock to participate, and the governance token for protocol parameters.
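The flow of steps 1 through 4 can be sketched as a single round of matching. This is illustrative only, assuming simplified request/offer dicts: each request, most urgent first, goes to the cheapest compatible prover with spare capacity. The real allocation is computed jointly by the auction mechanism, not greedily.

```python
def run_round(requests, offers):
    """Toy version of one marketplace round: filter offers for fit,
    then assign each request (sorted by deadline) to the cheapest
    compatible prover that still has capacity."""
    capacity = {o["name"]: o["capacity"] for o in offers}
    matches = []
    for req in sorted(requests, key=lambda r: r["deadline_s"]):
        fits = [o for o in offers
                if req["proof_type"] in o["proof_types"]  # right category
                and o["cost"] <= req["max_fee"]           # economics clear
                and capacity[o["name"]] > 0]              # spare capacity
        if fits:
            best = min(fits, key=lambda o: o["cost"])
            capacity[best["name"]] -= 1
            matches.append((req["id"], best["name"]))
    return matches
```

Even this toy version shows why heterogeneity matters: a low-latency request and a batch job never compete for the same prover unless that prover actually supports both categories.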

The auction mechanism we’ve created is called TODA (Truthful Online Double Auction). It’s designed specifically for heterogeneous goods, treating different proof types as distinct categories rather than assuming everything is interchangeable. The mechanism guarantees the properties we discussed: truthfulness, budget balance, individual rationality, computational efficiency.
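The whitepaper has the full mechanism. To give a flavor of how a double auction can be simultaneously truthful, budget-balanced, and individually rational, here is the classic trade-reduction rule for a single category of interchangeable jobs. TODA itself handles heterogeneous categories and online arrival, which this sketch deliberately does not:

```python
def trade_reduction(bids, asks):
    """Classic trade-reduction (McAfee-style) double auction for ONE
    category of interchangeable jobs. bids: what each application
    would pay; asks: each prover's cost. Returns (num_trades,
    buyer_price, prover_price)."""
    bids, asks = sorted(bids, reverse=True), sorted(asks)
    # k = number of efficient trades: the k highest bids each cover
    # one of the k lowest asks.
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1
    if k == 0:
        return 0, None, None
    # Drop the k-th (marginal) trade and price the remaining k-1 off
    # it: buyers pay bids[k-1], provers receive asks[k-1]. Since
    # bids[k-1] >= asks[k-1], fees collected always cover payments
    # (budget balance), no trader clears at a price worse than their
    # report (individual rationality), and because the price is set by
    # a non-trading participant, honest reporting is the best strategy.
    return k - 1, bids[k - 1], asks[k - 1]
```

Sacrificing the marginal trade is the cost of getting truthfulness and budget balance at once; the design challenge TODA addresses is preserving those guarantees when the goods are not interchangeable and requests arrive continuously.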

Running the marketplace on its own rollup serves a specific purpose as well. If auction coordination ran on Ethereum mainnet or a congested L2, market throughput would compete with other transactions for block space. A dedicated rollup isolates the marketplace from external congestion while keeping everything transparent and permissionless. Proofs generated through ProverNet can target any destination chain.

That staking mechanism is what provides accountability. Provers stake BREV to participate, and if a prover accepts a job and fails to deliver a valid proof before the deadline, their stake gets slashed. Applications don’t have to trust that provers will deliver. The mechanism ensures it.
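Schematically, settlement against staked collateral reduces to a simple rule. The slash fraction here is a hypothetical parameter for illustration, not ProverNet's actual value:

```python
def settle(stake, fee, proof_valid, met_deadline, slash_fraction=0.5):
    """Toy settlement for one job: on success the prover earns the fee;
    on an invalid proof or missed deadline, a fraction of its staked
    collateral is slashed. Returns (remaining_stake, payout)."""
    if proof_valid and met_deadline:
        return stake, fee
    return stake * (1.0 - slash_fraction), 0.0
```

Because the downside of failure is bounded by the stake rather than by reputation, an application can price the risk of any prover it has never worked with before.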

Where This Leads

We started this series with a simple observation: blockchains constrain what applications can do. Smart contracts can’t access their own history, can’t compute anything complex affordably, and can’t see what’s happening on other chains. Seven parts later, we’ve seen how zero-knowledge proofs dissolve those constraints one by one, and how the ecosystem building this technology has grown remarkably diverse in the process.

And that diversity is precisely what makes the ecosystem so capable. Every team optimizing a different proof system, every hardware configuration tuned for a different workload, and every application discovering new ways to use verifiable computation adds up to a landscape where the right capability exists for almost any problem. What’s been missing is the connective tissue that makes all of it discoverable and accessible.

Brevis CEO Michael has a prediction: “In ten years, 99% of computation for blockchain applications will happen off-chain, verified by ZK proof.” If that’s where we’re heading, and the trajectory of everything we’ve covered in this series suggests it is, then the coordination challenge we’ve described becomes foundational to the entire industry. 

ProverNet is how we’re building for that future. A marketplace where applications find the proving capacity they need, specialized teams find the workloads they’re built for, and the full diversity of the ZK ecosystem becomes unified, accessible, and efficient rather than fragmented.

This is where the series ends, but the real work is just beginning. Stay tuned for ProverNet mainnet beta launching soon.