Introducing Pico: A Modular and Performant zkVM

The Brevis team is thrilled to announce Pico, a zero-knowledge virtual machine (zkVM) offering unparalleled modularity and high efficiency. Pico empowers developers to optimize performance and user experience by constructing a zkVM in a modular, Lego-like manner that best fits the application’s computation characteristics. Developers can easily choose among a plethora of built-in options or add completely customized solutions for proving backends and machine instances, assembling specialized workflows to meet their unique requirements.

Pico also takes the “glue-and-coprocessor” architecture to a whole new level: it not only supports low-level coprocessors, such as precompiles to accelerate specific VM instruction-level operations, but also enshrines app-level ZK infrastructure like Brevis’s on-chain ZK Data Coprocessor. This boosts the performance of applications that utilize historical blockchain data by 32X. 

Pico is also highly performant. Though we do not yet have a fully functional GPU-accelerated clustering solution, Pico sets a new industry benchmark by achieving the world’s fastest performance on CPU, running 70% to 155% faster than the second-best solution among RISC0, SP1, and OpenVM.

With the launch of Pico v1.0, we’ve introduced the world’s first zkVM that allows developers to prove programs with customizable:

  • Proving backends: STARK on KoalaBear and BabyBear, as well as CircleSTARK on Mersenne31.
  • Proving workflows: Optimize security, scalability, and proof generation targets to suit your app’s unique needs.
  • Access to historical on-chain data: By integrating with the enshrined on-chain ZK Data Coprocessor, developers can build dApps that trustlessly access and compute over historical on-chain data with the best performance and programmability.

Pico is RISC-V compatible and supports Rust programming toolchains. Thanks to its modular architecture, Pico is future-proof and can easily support new innovations in ZK theoretical research. Whether you’re building a zk-powered next-gen dApp or exploring zero-knowledge systems, Pico provides the tools to drive innovation with confidence.

To start, check out the Pico Guide, explore our GitHub repo, and join the conversation in our developer Telegram group or Discord to contribute and stay up to date on the latest news.

Why Pico? 

Brevis’s initial product—the on-chain ZK Data Coprocessor—has been widely adopted in DeFi and beyond. Some of our partners have already launched Brevis-powered features on mainnet, including Kwenta, Usual, Algebra Labs, JoJo Exchange, and Trusta. Many other top protocols and dApps, such as PancakeSwap, Celer, Gamma, Quickswap, Frax, Mask Network, Kernel, BeraBorrow, Thena, Kim Protocol, 0G, Bedrock, Mellow Finance, ZettaBlock, Hemera, and Mendi Finance, are building Brevis-powered next-generation products and features. 

Challenges Uncovered

Our collaborations with these forward-thinking partners uncovered increasingly diverse and demanding requirements in customizable proving workflows. In particular:

  1. Meeting Diverse Application Needs
    Different projects often have unique performance priorities and business-logic complexities. A rigid, “one-size-fits-all” approach—whether it’s a monolithic zkVM or a fixed set of ZK circuits—makes it difficult to accommodate these varied needs. Systems that lack the flexibility to customize proving workflows or to integrate custom circuits (at either the opcode or application level) end up being inherently limited.
  2. Adopting New Cutting-Edge ZK Technologies
    The ZK space is evolving rapidly, with new proof backends, frameworks, and cryptographic breakthroughs emerging every few months. Systems that lack a modular architecture often struggle to adopt these new innovations—leading to outdated proofs, suboptimal performance, and missed opportunities for advanced optimizations.

From the developer’s perspective, the bottom line is clear: we need a zkVM that can pivot quickly to new cryptographic advancements and adapt seamlessly to the unique complexities of each application.

Pico is the Modular Solution, for Real

Pico was created precisely to tackle these challenges. Pico is designed to be modular and flexible with its unique architecture, enabling:

  • Multiple Proving Backends: Swap or upgrade to the latest proving backends without losing existing functionality.
  • Customizable Workflows: Tailor the proof generation pipeline to meet application-specific needs.
  • Coprocessor Integration: Build or integrate specialized coprocessors for your application without having to hack around a one-size-fits-all architecture.

Glue-and-Coprocessor Architecture Beyond Precompiles

Pico’s “glue-and-coprocessor” architecture combines high performance through specialized circuits (the “coprocessors”) with broad applicability via a general-purpose zkVM foundation (the “glue”). Coprocessors are customized circuits designed to accelerate intensive tasks such as arithmetic, cryptography, or machine learning, significantly boosting overall ZK proving efficiency. Meanwhile, the general-purpose zkVM (the glue) acts as the scaffold for ZK proofs, orchestrating the overall proving and verification process. The glue ensures that any logic not handled by the optimized, specialized circuits can still be securely proven within a universal framework.

Combined, the glue-and-coprocessor architecture achieves faster proof generation than traditional zkVMs while maintaining greater programmability than special-purpose-only coprocessors.

“Precompiles” are one common category of coprocessors: specialized circuits that extend the RISC-V instruction set to accelerate common low-level operations such as hashing or signature verification. Pico ships with many frequently used precompiles out of the box and supports developers in building precompiles that fit their unique needs.

While low-level precompiles have their place in the glue-and-coprocessor architecture, they alone cannot meet all of the performance requirements of real applications. Say a developer wants to generate a proof attesting that a trader has made 10,000 trades on Uniswap in the last 30 days amounting to $50M in volume. Using a zkVM alone, they would need to write a Merkle inclusion proof program and an RLP parsing program in Rust. Even if the zkVM is optimized with precompiles such as SHA to accelerate key operations in these programs, it remains fundamentally blind to the high-level application intent and, as a result, will inevitably suffer worse performance due to sub-optimal chunking, parallelization, and proving-pipeline construction.
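
To make the vanilla-zkVM path concrete, below is a minimal sketch of the kind of Merkle inclusion check such a guest program would contain, written in plain Rust against the general-purpose sha2 crate (which zkVM toolchains commonly patch so that hashing is routed through a SHA precompile). The proof layout and function names are illustrative only; real Ethereum storage and receipt proofs additionally involve Merkle-Patricia tries and RLP decoding.

```rust
// Minimal sketch of the Merkle inclusion check a vanilla-zkVM guest program
// would implement in plain Rust. The proof layout (one sibling hash plus a
// left/right flag per level) is illustrative, not an Ethereum trie proof.
use sha2::{Digest, Sha256};

/// One step of the inclusion path: the sibling hash and whether the
/// current node is the left child at this level.
pub struct ProofStep {
    pub sibling: [u8; 32],
    pub current_is_left: bool,
}

/// Recompute the root from a leaf and its inclusion path.
pub fn merkle_root(leaf: &[u8], path: &[ProofStep]) -> [u8; 32] {
    let mut node: [u8; 32] = Sha256::digest(leaf).into();
    for step in path {
        let mut hasher = Sha256::new();
        if step.current_is_left {
            hasher.update(node);
            hasher.update(step.sibling);
        } else {
            hasher.update(step.sibling);
            hasher.update(node);
        }
        node = hasher.finalize().into();
    }
    node
}

/// The guest program would assert this and commit the result as a public output.
pub fn verify_inclusion(leaf: &[u8], path: &[ProofStep], expected_root: [u8; 32]) -> bool {
    merkle_root(leaf, path) == expected_root
}
```

Everything around this check, from chunking the execution to parallelizing the proving and aggregating the results, is left to the zkVM, which is exactly where the application-blind performance loss occurs.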

To solve this challenge, we need to empower the zkVM with app-level coprocessors that can take over a larger share of the proof computation, combining optimized ZK circuits with app-aware systems that orchestrate these specialized circuits effectively.

Pico realizes this vision by integrating Brevis’s on-chain ZK Data Coprocessor as an enshrined app-level coprocessor. Developers building applications that access and compute over historical on-chain data can now enjoy the best of both worlds. The built-in on-chain ZK Data Coprocessor delivers ultra-high performance for accessing and proving data validity, powered by a horizontally scalable parallel proving system rather than specialized circuits alone. At the same time, developers retain ultimate flexibility to express arbitrarily complex computation logic over the retrieved data in plain Rust.

To showcase the advantages of our innovation, we compared the performance of a “vanilla” Pico VM with a “boosted” Pico VM enshrined with Brevis’s on-chain ZK Data Coprocessor. This comparison evaluates the efficiency of generating ZK proofs for total trade volume across 4096 Uniswap trades.

As shown in Table 1, the Pico VM with an enshrined coprocessor delivers an over-32× performance improvement (311s end-to-end versus 10,112s for vanilla Pico) while costing just 33% of vanilla Pico’s expense. Note that in the performance breakdown of the coprocessor-boosted Pico, the bottleneck is the VM computation logic. This means that even if the vanilla zkVM achieved a 10X performance gain through methods like GPU acceleration, the performance gap would still be significant.

Table 1
Performance Comparison of Coprocessor-Boosted Pico vs. Vanilla Pico (4096 Trades, Transaction Receipt Event Log Size: 40)

| Metric | Coprocessor-boosted Pico | Vanilla Pico |
| --- | --- | --- |
| Data Access Time | 177s | / |
| Aggregation Time | 21s | / |
| Data Computation Time | 290s | / |
| Total End-to-end Time | 311s | 10,112s |

Even though the coprocessor-boosted Pico is still more than 5X slower than the specialized Brevis on-chain ZK Data Coprocessor, we believe this offers a perfect tradeoff between performance and programmability that is not available on the market today.

This general design pattern is not limited to on-chain data access and compute use cases: Pico is now integrating verifiable AI inference and Reth app-level coprocessors to significantly accelerate proof generation for those cases as well.

By combining the glue and coprocessors, Pico offers developers a powerful and flexible tool to balance performance, programmability, and adaptability in ZK-powered applications.

Flexible Proving Backends

To maximize performance, specialized circuits for different dApp features often require the most advanced proving systems on certain prime fields. Pico is dedicated to incorporating the latest advancements in zero-knowledge technology by supporting multiple prime fields, including BabyBear, KoalaBear, and Mersenne31, as well as various proving systems like STARK and CircleSTARK. 

An illustrative example is Poseidon2, a recursion-friendly hash function used extensively in zkVM recursive proving. Even with the same STARK, proving Poseidon2 on the KoalaBear field is much more efficient than on the BabyBear field due to KoalaBear’s field properties. Thus, when a program incurs a large amount of Poseidon2 proving, a considerable performance gain can be achieved by simply switching to KoalaBear without touching existing proving logic.
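
As a purely illustrative sketch (the trait and type names below are not Pico’s or Plonky3’s actual APIs), the pattern looks like this: proving logic is written once, generic over the base field, so switching from BabyBear to KoalaBear is a one-line change at the call site.

```rust
// Illustrative only: a proving pipeline generic over the base field, so that
// switching fields requires no change to the proving logic itself.

/// Stand-in trait for a 31-bit prime field implementation.
pub trait PrimeField31: Copy {
    const MODULUS: u32;
    fn from_u32(x: u32) -> Self;
}

#[derive(Clone, Copy)]
pub struct BabyBear(u32);  // p = 2^31 - 2^27 + 1
#[derive(Clone, Copy)]
pub struct KoalaBear(u32); // p = 2^31 - 2^24 + 1

impl PrimeField31 for BabyBear {
    const MODULUS: u32 = (1 << 31) - (1 << 27) + 1;
    fn from_u32(x: u32) -> Self { BabyBear(x % Self::MODULUS) }
}
impl PrimeField31 for KoalaBear {
    const MODULUS: u32 = (1 << 31) - (1 << 24) + 1;
    fn from_u32(x: u32) -> Self { KoalaBear(x % Self::MODULUS) }
}

/// "Proving logic" written once, generic over the field.
pub fn prove_program<F: PrimeField31>(trace: &[u32]) -> Vec<F> {
    trace.iter().map(|&x| F::from_u32(x)).collect()
}

fn main() {
    let trace = vec![1, 2, 3];
    // Switching fields is a single type change; the proving logic is untouched.
    let _baby: Vec<BabyBear> = prove_program(&trace);
    let _koala: Vec<KoalaBear> = prove_program(&trace);
}
```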

With Pico’s ability to support a multitude of proving backends, developers are able to immediately benefit from performance improvements by experimenting with different configurations, and enjoy seamless upgrades as new proving backends are continuously integrated into Pico.

Customizable Proving Workflows

The way proofs are generated can significantly impact scalability, cost, and latency. Pico offers distinct pathways for customizing proving processes at different levels:

  1. Instance for Customizing Single Machine Proving Process

Each VM instance is highly composable to meet app-specific needs. Developers have the flexibility to customize:

  • Configurations for proving backends;
  • Chips to keep circuits lean or supercharged for domain-specific workloads;
  • Proving processes to balance speed, memory usage, or proof size.

In fact, changing the entire proving backend requires only a simple change to the instance configuration. Developers can use Pico to prototype quickly and then fine-tune performance for their individual workload with zero code changes needed.
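
As a hypothetical sketch of what this looks like in practice (none of the names below are Pico’s actual configuration API), an instance configuration can be modeled as plain data, so swapping the backend or tuning the workload is a configuration change rather than a code change:

```rust
// Hypothetical sketch of an instance configuration: field, proving system,
// enabled chips, and chunking parameters are all data. These names stand in
// for Pico's real configuration types.

#[derive(Clone, Copy, Debug)]
pub enum Field { BabyBear, KoalaBear, Mersenne31 }

#[derive(Clone, Copy, Debug)]
pub enum ProvingSystem { Stark, CircleStark }

#[derive(Clone, Debug)]
pub struct InstanceConfig {
    pub field: Field,
    pub system: ProvingSystem,
    /// Precompile/coprocessor chips compiled into the machine.
    pub chips: Vec<String>,
    /// Trade-off knob: larger chunks mean fewer proofs but more memory.
    pub max_chunk_size: usize,
}

impl Default for InstanceConfig {
    fn default() -> Self {
        InstanceConfig {
            field: Field::KoalaBear,
            system: ProvingSystem::Stark,
            chips: vec!["sha256".into(), "keccak".into()],
            max_chunk_size: 1 << 21,
        }
    }
}

fn main() {
    // Prototype with the default, then tune for the workload without code changes.
    let tuned = InstanceConfig {
        field: Field::Mersenne31,
        system: ProvingSystem::CircleStark,
        ..InstanceConfig::default()
    };
    println!("{:?}", tuned);
}
```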

  2. ProverChain for Composing Instance Proving Workflow

Instances can be chained together to compose a proving workflow. The default proving workflow we’ve constructed for Pico chains instances as RISCV → CONVERT → COMBINE → COMPRESS → EMBED → ONCHAIN. With this workflow, the RISC-V program is first chunked and proved into a list of proofs; each proof is then converted into a recursion proof. These recursion proofs are further combined into a single recursion proof, compressed and embedded into the BN254 field, and finally transformed into an on-chain verifiable proof.

Although a default proving workflow is provided, developers can easily add, adjust, or remove instances to tailor the workflow for specific applications, as sketched below. For example, developers can opt for better proving efficiency by slightly increasing proof sizes, or remove the last ONCHAIN recursion step entirely if they don’t want to verify their proofs on EVM-based blockchains.
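
The sketch below illustrates the composition pattern: phases are chained into a workflow and can be dropped or adjusted per application. The types and methods are hypothetical, not Pico’s actual ProverChain API.

```rust
// Illustrative sketch of composing a proving workflow from chained phases.
// The phase names mirror the default workflow described above; the types and
// methods below are hypothetical.

#[derive(Clone, Copy, Debug, PartialEq)]
pub enum Phase { Riscv, Convert, Combine, Compress, Embed, Onchain }

pub struct ProverChain {
    phases: Vec<Phase>,
}

impl ProverChain {
    /// Default workflow: RISCV -> CONVERT -> COMBINE -> COMPRESS -> EMBED -> ONCHAIN.
    pub fn default_workflow() -> Self {
        ProverChain {
            phases: vec![
                Phase::Riscv,
                Phase::Convert,
                Phase::Combine,
                Phase::Compress,
                Phase::Embed,
                Phase::Onchain,
            ],
        }
    }

    /// Drop a phase, e.g. skip ONCHAIN when EVM verification is not needed.
    pub fn without(mut self, phase: Phase) -> Self {
        self.phases.retain(|p| *p != phase);
        self
    }

    pub fn phases(&self) -> &[Phase] {
        &self.phases
    }
}

fn main() {
    let offchain_only = ProverChain::default_workflow().without(Phase::Onchain);
    println!("{:?}", offchain_only.phases());
}
```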

Setting a New State-of-the-Art Performance Benchmark

In our initial benchmarking effort, Pico consistently outperforms all existing zkVM solutions on CPUs. We compared Pico against the latest releases of RISC0, SP1, and OpenVM (as of February 6, 2025), each of which supports recursive proving, thus allowing for proving arbitrarily large workloads.

We ran the benchmarks on the same CPU machine, an AWS r7a.48xlarge instance with 192 CPU cores and 1.5 TB of RAM, allowing all zkVMs to generate proofs up to the final STARK proof before it is transformed into a SNARK. We evaluated the commonly used Fibonacci workload as well as two real-world scenarios: Tendermint and Reth block #17106222. In every case, Pico demonstrated significant speed-ups, achieving up to 155% faster performance than the second-best solution and setting a new standard as the most performant state-of-the-art zkVM.
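
For reference, the Fibonacci workload amounts to a guest program along the lines of the sketch below. This is a framework-agnostic, illustrative version: in a real guest, the input would be read from the prover and the result committed as a public output, and that plumbing differs across RISC0, SP1, OpenVM, and Pico; the iteration count shown is also illustrative, not the benchmark’s actual parameter.

```rust
// Illustrative, framework-agnostic version of a Fibonacci benchmark workload.
// In an actual zkVM guest, `n` would come from the prover's input and the
// result would be committed as a public output.

fn fibonacci(n: u32) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a.wrapping_add(b); // wrap to keep the loop cheap for large n
        a = b;
        b = next;
    }
    a
}

fn main() {
    let n = 100_000; // illustrative size only
    println!("fib({}) mod 2^64 = {}", n, fibonacci(n));
}
```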

Table 2
Performance Benchmarks of RISC0, OpenVM, SP1, and Pico on AWS r7a.48xlarge (192 Cores, 1.5TB RAM) Across the Fibonacci, Tendermint, and Reth-block 17106222 Tasks

| Task | VM | Version | Runtime (s) | Pico Speedup (%) |
| --- | --- | --- | --- | --- |
| Fibonacci | RISC0 | v1.2.2 | 89 | 89.36 |
| Fibonacci | OpenVM | v1.0.0-rc.0 | 80 | 70.21 |
| Fibonacci | SP1 | v4.0.1 | 81 | 72.34 |
| Fibonacci | Pico | v1.0 | 47 | 0 |
| Tendermint | RISC0 | v1.2.2 | 2,722 | 2,595.05 |
| Tendermint | OpenVM | v1.0.0-rc.0 | 5,188 | 5,036.63 |
| Tendermint | SP1 | v4.0.1 | 258 | 155.45 |
| Tendermint | Pico | v1.0 | 101 | 0 |
| Reth-block 17106222 | RISC0 | v1.2.2 | 16,008 | 1,361.92 |
| Reth-block 17106222 | OpenVM | v1.0.0-rc.0 | 2,019 | 84.38 |
| Reth-block 17106222 | SP1 | v4.0.1 | 2,644 | 141.46 |
| Reth-block 17106222 | Pico | v1.0 | 1,095 | 0 |

* To ensure a fair comparison in the Reth benchmark, we adapted RISC0, SP1, OpenVM, and Pico to prove the exact same Reth-17106222 program (all adapted from here), using each framework to compile its own respective program.

Our GPU-accelerated cluster version of Pico will be released soon, and we expect the CPU performance results and gains demonstrated here to largely translate to GPU-based computation as well. We’ll publish our GPU benchmarks once the accelerator is ready.

On the Shoulders of Giants

Pico draws inspiration from the following projects, each representing cutting-edge advancements in zero-knowledge proof systems. By building upon their innovations, Pico delivers a modular and performant zkVM:

  • Plonky3: Pico’s proving backend is based on Plonky3, extending its modularity to the zkVM layer to enable the flexible selection of proving fields and systems that best fit each use case.
  • SP1: Pico derives significant inspiration from SP1’s chip design and constraints. Pico’s recursion compiler and precompiles originate from SP1.
  • Valida: Pico’s implementation of cross-table lookups is inspired by Valida’s pioneering work in this area.
  • RISC0: Pico’s Rust toolchain is based on the one originally developed by RISC0.

Build with Us

At Brevis, we believe the future of zero-knowledge technology lies in collaboration and innovation. Pico is more than a zkVM—it’s a platform for creating the next generation of zk-powered applications. We invite developers, researchers, and innovators to join us in shaping this future. Dive into our comprehensive Pico Guide to get started, explore the modular architecture and codebase on our GitHub repo, and connect with like-minded builders in our Telegram community or Discord. Your contributions and insights are key to unlocking new possibilities. Together, let’s push the boundaries of what zero-knowledge computation can achieve and build an intelligent and trustless decentralized world!