We’ve spent five parts building up the conceptual toolkit: what blockchains can’t do natively, how cryptographic proofs solve verification at scale, what zero knowledge adds, where ZK applications are emerging, and how zkVMs bridge the gap between developer code and proof systems. Along the way, a pattern emerged: most projects specialize. Rollup zkVMs optimize for transaction throughput. Privacy systems optimize for confidentiality. Bridge provers optimize for cross-chain finality.
But what if your applications don’t fit neatly into one category? What if you need sub-second proving for a DEX, bulk processing for reward distributions, and actual privacy preservation for identity verification, all from the same infrastructure?
This part looks at how Brevis approached that problem. Think of it as a case study in architectural decisions that diverge from how most of the ZK ecosystem developed.
Why Build for Heterogeneity?
Most zkVM projects began with a specific use case in mind. Build for rollups. Build for privacy. Build for bridges. Optimize aggressively for that domain, then potentially expand later. This approach makes a lot of sense. Focused optimization produces better performance for your target use case, and you can iterate from a working product rather than trying to boil the ocean.
Brevis took a different approach from the beginning. Rather than optimizing for a single use case and hoping to expand later, the architecture was designed to handle genuinely heterogeneous workloads from day one. The reasoning was straightforward: if ZK is going to become core infrastructure for blockchain applications, it needs to work across the full range of what applications actually need, not just the subset that fits a particular proving system’s sweet spot.
This wasn’t an obvious choice. Building for heterogeneity is harder than building for a single use case. You can’t optimize as aggressively for one thing when you need to handle many things. A system designed specifically for rollup execution would outperform a general-purpose system on rollup workloads. A system designed specifically for privacy would have deeper integration of zero-knowledge-specific optimizations.
The case for accepting those tradeoffs comes back to verifiable computation as a category. As we saw in Part 4, verifiable computation isn’t a single use case like rollups or bridges. It’s a broad and expanding domain. So infrastructure serving verifiable computation has to handle whatever applications actually need, which means flexibility isn’t a compromise; it’s the core requirement.
Pico zkVM: The Glue-and-Coprocessor Architecture
At the core of Brevis’s approach is Pico, a modular zkVM designed around what’s called the “glue-and-coprocessor” architecture. The concept sounds technical, but the intuition is actually quite simple if you think about how modern computers work.
Your laptop’s CPU is a general-purpose processor that can execute any instructions, but it’s not particularly good at any specific task. For specialized operations like graphics rendering, you use a dedicated GPU that’s optimized for that specific workload. For machine learning inference, you might use a TPU or specialized accelerator. The CPU acts as the “glue” that coordinates everything, deciding what goes where and handling the general-purpose logic, while the specialized processors handle the heavy lifting in their domains.
Pico works similarly. There’s a minimal, high-performance core that can prove general-purpose programs written in standard languages like Rust. This core uses RISC-V as its instruction set, which means all the existing Rust compiler tooling works out of the box. You write normal Rust code, compile it to RISC-V, and Pico can prove that it executed correctly.
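To make that concrete, here is what such a program can look like. This is ordinary Rust with nothing ZK-specific in it; the values and the plain main function are purely illustrative, and a real guest program would receive its inputs and commit its outputs through the zkVM’s SDK rather than hard-coding them as this sketch does.

```rust
// Ordinary Rust that a RISC-V zkVM can prove. The function itself needs
// nothing ZK-specific; in a real guest program the input would arrive
// from the host via the zkVM's SDK instead of being hard-coded here.

/// Sum a list of u64 amounts, saturating on overflow.
fn total_volume(amounts: &[u64]) -> u64 {
    amounts.iter().fold(0u64, |acc, a| acc.saturating_add(*a))
}

fn main() {
    // Hypothetical input; the host would normally supply this.
    let trades = [1_250_u64, 40_000, 7_775];
    let total = total_volume(&trades);
    // In a zkVM, this result would be committed as a public output so the
    // on-chain verifier can read it alongside the proof.
    println!("total volume: {total}");
    assert_eq!(total, 49_025);
}
```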
But here’s what’s different about Pico’s approach: the RISC-V core is deliberately minimal. It handles general-purpose logic well, the control flow, data movement, and other parts of a program that don’t fall into heavily optimized categories. For everything else, Pico routes operations to specialized coprocessors.
These coprocessors come in two flavors.
Function-level coprocessors (often called precompiles) handle specific cryptographic operations that appear constantly in ZK applications. Think about what blockchain programs actually do: they hash data every time they verify a transaction or check a Merkle proof, they validate signatures, and they perform elliptic curve arithmetic. In a pure RISC-V execution environment, these operations are expensive to prove. Every step of a Keccak-256 hash (Ethereum’s native hash function) becomes constraints that must be proven, and a single hash might translate to thousands of constraint rows. Multiply that across all the hashing a typical blockchain program does, and proving costs add up fast.
Pico’s precompiles replace this generic execution with dedicated circuits built specifically for common operations. When your program calls Keccak-256, instead of the zkVM proving thousands of RISC-V instructions step by step, it routes to a dedicated Keccak circuit that’s been optimized for ZK proving. The result is the same, but the proving cost drops dramatically.
The integration is designed to be invisible to developers. Pico maintains forked versions of common Rust cryptographic libraries (tiny-keccak for Keccak hashing, sha2 for SHA-256, curve25519-dalek for elliptic curve operations, secp256k1 for Bitcoin/Ethereum-style signatures, and others). These forks compile in a way that triggers the appropriate precompiles automatically. From the developer’s perspective, you just use standard Rust libraries. From the prover’s perspective, expensive operations get routed to efficient circuits without any manual intervention.
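Here is what that looks like from the developer’s side: plain use of the standard crates named above, with nothing precompile-specific in the code. The input bytes are made up for illustration, and the routing to dedicated circuits happens, as described above, inside the patched library forks rather than in anything the developer writes.

```rust
// Standard Rust crypto crates. When built against Pico's forked versions
// of these libraries (as described above), calls like these get routed to
// dedicated circuits instead of being proven instruction by instruction.
// The application code itself is unchanged.
use sha2::{Digest, Sha256};
use tiny_keccak::{Hasher, Keccak};

fn keccak256(data: &[u8]) -> [u8; 32] {
    let mut hasher = Keccak::v256();
    let mut out = [0u8; 32];
    hasher.update(data);
    hasher.finalize(&mut out);
    out
}

fn sha256(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    out.copy_from_slice(&Sha256::digest(data));
    out
}

fn main() {
    // Illustrative input only.
    let leaf = b"transfer(alice, bob, 100)";
    println!("keccak256: {:02x?}", keccak256(leaf));
    println!("sha256:    {:02x?}", sha256(leaf));
}
```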
Application-level coprocessors go beyond individual operations to handle entire categories of computation. This is where Pico diverges more significantly from other zkVMs. Instead of optimizing a single cryptographic function, these coprocessors integrate arrays of specialized circuits that work together to tackle broader, domain-specific computational challenges.
The ZK Data Coprocessor, which we’ll discuss next, is the primary example. It’s not just accelerating one function; it’s specialized infrastructure for an entire category of computation that smart contracts can’t do on their own. By incorporating application-level coprocessors, Pico serves as the “glue” that routes data between high-efficiency modules while maintaining the flexibility of a general-purpose zkVM.
The modularity extends to proving backends as well. Remember from Part 3 and Part 5 that different proof systems offer different tradeoffs: SNARKs for small proofs and cheap verification, STARKs for transparency and no trusted setup, different prime fields for different performance characteristics. Rather than committing to a single proof system, Pico supports multiple backends that developers can select based on their requirements.
In practice, this means the same application logic can target different proving configurations. An application that prioritizes proving speed might use STARK-based proving optimized for fast generation. An application that needs the smallest possible on-chain proofs can wrap the result in a SNARK system like Groth16 (which produces some of the smallest proofs available, just a few hundred bytes) for minimal gas costs. The code stays the same; the proving pipeline adapts to the requirements.
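Conceptually, the choice reduces to something like the sketch below. The enum and function names are hypothetical, invented for illustration, and are not Pico’s actual configuration API; they only capture the tradeoff being made.

```rust
// Hypothetical sketch of backend selection. These names are illustrative
// only and do not correspond to Pico's real configuration API.
#[derive(Clone, Copy, Debug)]
enum ProvingBackend {
    /// Fast STARK proving: larger proofs, no trusted setup.
    Stark,
    /// STARK proof wrapped in a Groth16 SNARK: slower to generate,
    /// but only a few hundred bytes to verify on-chain.
    StarkWrappedInGroth16,
}

fn pick_backend(needs_cheap_onchain_verification: bool) -> ProvingBackend {
    if needs_cheap_onchain_verification {
        ProvingBackend::StarkWrappedInGroth16
    } else {
        ProvingBackend::Stark
    }
}

fn main() {
    // Same application logic either way; only the proving pipeline changes.
    println!("{:?}", pick_backend(true));
    println!("{:?}", pick_backend(false));
}
```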
The ZK Data Coprocessor
If Pico is the engine, the ZK Data Coprocessor is what makes the engine useful for a huge category of applications that need to work with blockchain data.
Remember the fundamental limitation we discussed way back in Part 1: smart contracts can’t efficiently access their own history. A contract can read its current state, but querying what happened in previous blocks, aggregating historical transactions, or computing analytics over time is either impossible or prohibitively expensive on-chain. The data is technically there on the blockchain, publicly available, but practically inaccessible to the contract itself.
This is a direct consequence of the verification model we discussed in Part 2. Every validator needs to verify every operation, which means every operation needs to be lightweight enough that hundreds of thousands of validators can perform it. Historical queries would require validators to maintain and traverse massive state archives for every verification, which simply doesn’t scale.
The ZK Data Coprocessor solves this by moving the data access and computation off-chain while maintaining on-chain verifiability. The pattern follows directly from what we covered about cryptographic proofs: one party does the expensive work, generates a succinct proof, and everyone else verifies the proof cheaply.
How it works
When an application needs historical data, say a user’s trading volume over 30 days, the process follows a straightforward flow:
- The coprocessor fetches the relevant historical data from the blockchain (transaction logs, state changes, balances over time)
- It performs the specified computation (aggregation, time-weighting, comparisons against thresholds) off-chain
- It generates a ZK proof that the computation was performed correctly over the actual blockchain data
- The smart contract verifies the proof and receives the result
From the smart contract’s perspective, it receives verified data that it can trust as much as any other on-chain data. The proof guarantees that the off-chain computation actually used the correct historical data and performed the calculations correctly. There’s no oracle to trust, no centralized service that could manipulate results. The math verifies itself.
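To make the flow concrete, here is a hedged sketch of the off-chain computation that would sit behind a 30-day trading volume query. The event type and numbers are invented for illustration; the real coprocessor works over actual transaction logs, but the shape of the work is the same kind of filter-and-aggregate shown here, which is then attested by the proof.

```rust
// Illustrative sketch (not Brevis's SDK): the off-chain computation the
// coprocessor would prove for a "30-day trading volume" query.

/// One historical swap event fetched from transaction logs.
struct SwapEvent {
    timestamp: u64,  // Unix seconds
    usd_volume: u64, // volume in whole USD, for simplicity
}

/// Sum the user's volume over the trailing 30-day window.
fn volume_last_30_days(events: &[SwapEvent], now: u64) -> u64 {
    const WINDOW: u64 = 30 * 24 * 60 * 60;
    events
        .iter()
        .filter(|e| now.saturating_sub(e.timestamp) <= WINDOW)
        .map(|e| e.usd_volume)
        .sum()
}

fn main() {
    let now = 1_700_000_000;
    let events = [
        SwapEvent { timestamp: now - 100_000, usd_volume: 12_000 },
        SwapEvent { timestamp: now - 2_000_000, usd_volume: 5_500 },
        SwapEvent { timestamp: now - 5_000_000, usd_volume: 9_000 }, // outside window
    ];
    let volume = volume_last_30_days(&events, now);
    // The ZK proof attests that this result was computed over the real
    // on-chain events, so the contract can trust the number directly.
    println!("30-day volume: {volume}");
    assert_eq!(volume, 17_500);
}
```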
Why this differs from oracles
This trust model is worth emphasizing because it’s what distinguishes ZK coprocessors from oracles. With an oracle, you’re trusting the oracle operator to report accurate data. Economic incentives like staking and reputation make fraud expensive, but the security model is fundamentally trust-based. You’re relying on rational economic actors not to misbehave.
With a ZK coprocessor, the security is cryptographic. The proof system guarantees, with mathematical certainty rooted in the soundness property we discussed in Part 3, that the computation was performed correctly. The verifier (in this case, a smart contract) checks the proof and accepts or rejects the result based purely on cryptographic verification. No trust in the operator is required because the math itself provides the guarantee.
What this enables
The types of computations this unlocks are broad: time-weighted balance calculations that determine how long a user held tokens and in what amounts, trading volume aggregations that compute a user’s activity across thousands of transactions, position analytics that track lending and borrowing behavior over weeks or months, or cross-protocol activity that spans multiple contracts and even multiple chains.
These calculations would be impossible to perform on-chain, not because they’re conceptually difficult, but because the gas costs would be astronomical. Iterating through thousands of historical events and performing aggregations on each would cost more than any reasonable application could afford. The coprocessor makes them practical by moving the computation off-chain while keeping the verification on-chain where costs are manageable.
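As one worked example, a time-weighted balance boils down to weighting each balance by how long it was held and dividing by the window length. The sketch below uses made-up types and integer math and is not Brevis code, but it shows why this is cheap off-chain and painful on-chain: it loops over every balance change in the window.

```rust
// Illustrative time-weighted average balance over balance snapshots.
// Hypothetical types and numbers, not Brevis code.

struct Snapshot {
    timestamp: u64, // Unix seconds at which the balance changed
    balance: u128,  // token balance from that point onward
}

/// Time-weighted average balance between `start` and `end`, given
/// snapshots sorted by timestamp with the first at or before `start`.
fn time_weighted_balance(snapshots: &[Snapshot], start: u64, end: u64) -> u128 {
    let mut weighted_sum: u128 = 0;
    for (i, snap) in snapshots.iter().enumerate() {
        let from = snap.timestamp.max(start);
        let to = snapshots
            .get(i + 1)
            .map(|next| next.timestamp)
            .unwrap_or(end)
            .min(end);
        if to > from {
            // Weight the balance by how long it was held in the window.
            weighted_sum += snap.balance * (to - from) as u128;
        }
    }
    weighted_sum / (end - start) as u128
}

fn main() {
    // Hold 100 tokens for the first half of the window, 300 for the second.
    let snaps = [
        Snapshot { timestamp: 0, balance: 100 },
        Snapshot { timestamp: 500, balance: 300 },
    ];
    // Average: (100 * 500 + 300 * 500) / 1000 = 200.
    assert_eq!(time_weighted_balance(&snaps, 0, 1_000), 200);
    println!("ok");
}
```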
Other Coprocessors
The ZK Data Coprocessor is the most mature in the Brevis stack, but the glue-and-coprocessor architecture supports others. Each addresses a different category of computation that benefits from the same pattern: expensive work off-chain, cheap verification on-chain.
zkTLS enables verification of data from web2 sources. Traditional websites and APIs use TLS encryption for secure communication, but that encryption makes it impossible to prove what data you received without trusting you to report it honestly. A zkTLS coprocessor can prove that specific data came from a specific source, that an API returned a particular response, or that a website displayed certain information at a given time. This opens up use cases like proving your account status on a traditional platform, verifying off-chain credentials, or bringing web2 data on-chain with cryptographic guarantees rather than oracle trust assumptions.
zkML addresses machine learning inference. As AI becomes more prevalent in blockchain applications, there’s a growing need to prove that a specific model produced a specific output without revealing proprietary model weights or sensitive input data. A zkML coprocessor enables privacy-preserving inference where applications can verify AI outputs are genuine without exposing the underlying computation.
The modular design means new coprocessors can be integrated as new categories of verifiable computation emerge. The architecture isn’t limited to what exists today.
How the Stack Works Together
The power of Brevis’s approach comes from how Pico and the coprocessors integrate into a unified system.
Consider what happens when PancakeSwap wants to offer trading fee discounts based on a user’s 30-day trading volume. The user initiates a swap. Before the swap executes, the DEX needs to know if this user qualifies for a VIP fee tier.
The ZK Data Coprocessor queries the user’s historical trades across the relevant pools. It aggregates the volume, compares against the tier thresholds, and determines the user’s current VIP status. Pico generates a proof that this determination is correct. The proof gets verified on-chain, and the smart contract applies the appropriate fee tier to the swap.
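Once the 30-day volume arrives as a proof-verified number, the contract-side decision is a simple threshold comparison, sketched below in Rust for consistency with the other examples. The tiers and fee units are hypothetical, chosen only to show the shape of the logic; they are not PancakeSwap’s actual parameters.

```rust
// Hypothetical VIP tier thresholds; the real programs and numbers differ.
// Given a proof-verified 30-day volume, picking the fee tier is trivial.

/// Fee in hundredths of a basis point (e.g. 2_500 = 0.25%).
fn fee_for_volume(volume_usd: u64) -> u32 {
    match volume_usd {
        v if v >= 10_000_000 => 1_000, // 0.10%
        v if v >= 1_000_000 => 1_700,  // 0.17%
        v if v >= 100_000 => 2_200,    // 0.22%
        _ => 2_500,                    // 0.25% default tier
    }
}

fn main() {
    assert_eq!(fee_for_volume(50_000), 2_500);
    assert_eq!(fee_for_volume(2_000_000), 1_700);
    println!("ok");
}
```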
All of this happens fast enough that the user experiences it as a normal swap with a pleasant surprise: lower fees because the system recognized their trading history.
Now consider a completely different use case: Euler distributing $100,000 in lending rewards every four hours. The system needs to calculate time-weighted supply balances for thousands of addresses, determine each user’s share of the rewards, and enable trustless claiming.
The ZK Data Coprocessor processes the lending activity for all eligible addresses. Pico generates proofs for the reward calculations. Users can claim their rewards by verifying the proof on-chain. No centralized backend decides who gets what. No spreadsheet or database that users have to trust. The cryptographic proof is the source of truth.
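The reward math itself is a straightforward pro-rata split over verified weights; the hard part is proving the weights. The sketch below uses invented numbers and names rather than Euler’s or Brevis’s actual parameters, and rounds down the way integer on-chain arithmetic typically does.

```rust
// Illustrative pro-rata split of a reward pool by time-weighted balance.
// Hypothetical numbers and names; not Euler's or Brevis's actual setup.

/// Each user's reward = pool * user_weight / total_weight, in integer
/// math with rounding down.
fn distribute(pool: u128, weights: &[(&str, u128)]) -> Vec<(String, u128)> {
    let total: u128 = weights.iter().map(|(_, w)| w).sum();
    weights
        .iter()
        .map(|(addr, w)| (addr.to_string(), pool * w / total))
        .collect()
}

fn main() {
    // A $100,000 pool (in cents) split across three suppliers by weight.
    let rewards = distribute(10_000_000, &[("alice", 50), ("bob", 30), ("carol", 20)]);
    for (addr, amount) in &rewards {
        println!("{addr}: {amount} cents");
    }
    // alice 5,000,000 / bob 3,000,000 / carol 2,000,000
}
```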
Same infrastructure, same stack, completely different applications. One requires sub-second latency for individual users. The other processes bulk calculations for thousands of addresses in scheduled epochs. Both work because the architecture was designed for this kind of heterogeneity.
A Category of Its Own
Brevis’s implementation of the glue-and-coprocessor architecture has created something that doesn’t fit neatly into the categories we’ve covered throughout this series. Yes, verifiable computation is a broad domain within ZK applications. But even among other ZK coprocessor projects, there’s typically a narrower scope: one focuses primarily on historical Ethereum queries, another on cross-chain storage proofs between rollups, another on SQL-based data access for specific use cases. Each carved out a vertical and optimized for it.
Brevis stands alone as a ZK project that’s found a formula to address verifiable computation to its fullest extent. The tagline “Infinite Compute Layer” sounds like marketing, but it’s actually a reasonably accurate description of what the stack enables. The same infrastructure serves use cases across nearly every domain where ZK proofs provide value.
Intelligent DeFi is where protocols like PancakeSwap, Uniswap, MetaMask, Aave, Euler, QuickSwap, and others use Brevis to personalize user experiences based on historical behavior. Volume-based fee discounts, loyalty rewards, dynamic configurations. Features that were standard on centralized exchanges but impossible on-chain now run trustlessly.
Ecosystem Growth is where new blockchains launch sophisticated incentive programs with verifiable distribution. Linea’s billion-token Ignition program, TAC’s user acquisition campaigns, Units Network’s liquidity bootstrapping. Instead of opaque point systems and trust-me spreadsheets, participants can verify their rewards are calculated correctly.
Stablecoins and RWA is where protocols like Usual and OpenEden run continuous incentivization with transparent distribution. Daily rewards based on verified on-chain activity rather than snapshots or centralized databases.
L1 Scaling is where Pico Prism competes for integration into Ethereum’s core architecture. Real-time block proving that could shift consensus from network-wide re-execution to single-node proving with distributed verification. An entirely different domain from coprocessor applications, yet running on the same underlying zkVM.
Cross-chain Interoperability is where projects like Kernel use Brevis for cross-chain restaking and trustless bridging. ZK proofs enabling shared security across rollups and L1 blockchains.
Verifiable AI is where Kaito, Trusta, Kite AI, Pieverse, Vana, and others build AI applications with privacy-preserving personalization and provable model outputs. As AI moves on-chain, proving that specific models produced specific outputs becomes essential for trust.
Privacy is where projects like Automata Network use ZK for confidential data processing while maintaining verification integrity. The zero knowledge property used for its intended purpose.
Security is where GoPlus delivers ZK-verified security assessments and risk analysis for DeFi protocols. Trustless security data without relying on centralized reputation.
A few years ago, it would have been difficult to imagine a single ZK project with integrations across 30+ major protocols on 6 blockchains, generating over 250 million proofs. Despite our best attempts throughout this series to map the ZK landscape into clean categories, there isn’t one that fully captures the breadth and flexibility that Brevis’s stack has enabled. The closest description might simply be: infrastructure for verifiable computation, whatever form that computation takes.
Going Deep Where It Matters Most
We’ve established how Brevis’s architecture enables coverage across the full spectrum of verifiable computation. That alone is a significant undertaking, and you might reasonably expect it would prevent us from participating in the kind of specialized optimization that other projects focus on exclusively.
It doesn’t.
And the domain we chose to go deep on connects directly to why applications need ZK proofs in the first place. The root cause of the constraints we discussed back in Part 1 isn’t at the application layer. It’s in how blockchains validate transactions: hundreds of thousands of validators independently re-executing every operation in every block. The same “compute once, verify everywhere” logic that helps applications could help the blockchain itself.
Brevis built for that. The Pico architecture that powers coprocessor applications is flexible and powerful enough for block proving, and we’ve pushed it further with Pico Prism, optimized specifically for real-time Ethereum blocks. Part 7 covers how it works and what it achieved.

