Every time you execute a transaction on Ethereum, something remarkable happens. Hundreds of thousands of computers around the world, operated by people who have never met and have no reason to trust each other, all run the exact same computation independently. They compare results. If everyone agrees, the transaction is valid. If anyone disagrees, something is wrong.
This is how Ethereum achieves trustlessness. No single party can lie about what happened because everyone else would catch them. But it’s also, when you think about it, extraordinarily wasteful. The same calculation, performed hundreds of thousands of times, just to make sure one person did it correctly.
This part explores what happens when we apply everything we’ve learned about ZK proofs to this fundamental problem. If proofs can replace trust, can they also replace redundant computation at the very core of how blockchains work?
The Logic of Redundant Verification
To understand what we’re trying to solve, we need to understand why Ethereum works this way in the first place.
Blockchains emerged from a specific problem: how do you maintain a shared record among parties who don’t trust each other? Traditional databases solve this by designating an authority. Someone owns the database, controls access, and everyone trusts them to maintain accurate records. But that trust creates a vulnerability. The authority can change records, deny access, or simply disappear.
Bitcoin’s breakthrough was eliminating the authority entirely. Instead of trusting a central party, every participant maintains their own complete copy of the record. When someone proposes a new transaction, everyone checks it independently against their own copy. If the transaction is valid according to the rules everyone agreed to, it gets added. If not, it’s rejected. No authority needed because everyone can verify for themselves.
This works beautifully for simple transfers. Checking whether Alice has enough coins to send to Bob is straightforward arithmetic. But Ethereum extended this model to arbitrary computation. Smart contracts can contain complex logic, interact with other contracts, and maintain state across transactions. When you execute a swap on a decentralized exchange, a lending position adjustment, or an NFT mint, that’s sophisticated code running on hundreds of thousands of computers simultaneously.
Every validator starts from the same state: the complete snapshot of all accounts, balances, and contract storage at the end of the previous block. They receive the proposed block containing new transactions. They execute each transaction in order, updating their local copy of the state. At the end, they compare their resulting state against what the block proposer claimed. If they match, the block is valid. If they don’t, the proposer tried something invalid and the block gets rejected.
This is why we call it “re-execution.” The block proposer already executed these transactions when building the block. Every validator re-executes them to verify the proposer didn’t cheat. The redundancy may seem like a flaw, but it’s actually the source of the security guarantee. You don’t have to trust the proposer because you can check their work. You don’t have to trust any particular validator because the whole network is checking.
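To make the flow concrete, here is a minimal sketch of that check. The `State`, `Transaction`, and `Block` types below are illustrative stand-ins for a real execution client’s internals, not any particular implementation:

```rust
// Illustrative stand-ins for an execution client's internals.
struct State;       // snapshot of all accounts, balances, and contract storage
struct Transaction; // a signed transaction from the proposed block

struct Block {
    transactions: Vec<Transaction>,
    claimed_state_root: [u8; 32], // the post-state root the proposer committed to
}

impl State {
    /// Apply one transaction to the local copy of the state.
    fn execute_transaction(&mut self, _tx: &Transaction) {
        // ...full EVM execution would happen here...
    }

    /// Merkle root summarizing the entire state.
    fn root(&self) -> [u8; 32] {
        [0u8; 32] // placeholder
    }
}

/// What every validator does today: re-execute the block and compare roots.
fn validate_by_reexecution(mut pre_state: State, block: &Block) -> bool {
    for tx in &block.transactions {
        pre_state.execute_transaction(tx); // the expensive part, repeated by everyone
    }
    pre_state.root() == block.claimed_state_root
}
```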
But redundancy creates constraints. Gas limits exist because blocks must remain small enough that ordinary hardware can re-execute them within a 12-second slot. Increase the computational load per block and you start excluding validators who can’t keep up. Exclude enough validators and you’ve centralized the network around whoever can afford the fastest machines. The security guarantee depends on broad participation, and broad participation requires keeping the computational burden manageable.
From Re-Execution to Verification
The properties we covered in Part 3 point toward a different model entirely.
If you can prove that a computation was done correctly, verifiers don’t need to redo the computation. They just check the proof. And proof verification is fast, taking milliseconds regardless of how complex the original computation was. This asymmetry, expensive proving but cheap verification, is exactly what blockchains need.
The idea of applying this to Ethereum itself isn’t new. In December 2023, Vitalik Buterin published a blog post proposing an “enshrined zkEVM,” a zero-knowledge proving system integrated directly into Ethereum’s protocol. The core concept was elegant: instead of every validator re-executing blocks, a small number of provers would generate proofs that the blocks were executed correctly. Everyone else would verify those proofs.
This would transform the computational economics completely. One prover does the expensive work once. Everyone else verifies cheaply. Total network computation drops by orders of magnitude while security guarantees remain intact.
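Sketched against the re-execution check above, the validator’s job becomes a single proof verification over a public statement about state roots. The types and the `verify_proof` function here are assumptions for illustration, not the interface any enshrined zkEVM has actually specified:

```rust
// Hypothetical proof artifacts; a real enshrined zkEVM would define these precisely.
struct ExecutionProof(Vec<u8>);
struct VerifyingKey;

/// The public statement being proven: "executing this block on top of
/// `pre_state_root` yields `post_state_root`".
struct PublicInputs {
    pre_state_root: [u8; 32],
    block_hash: [u8; 32],
    post_state_root: [u8; 32],
}

/// Placeholder for the succinct verifier. The real thing runs in milliseconds,
/// regardless of how much computation the block contained.
fn verify_proof(_vk: &VerifyingKey, _inputs: &PublicInputs, _proof: &ExecutionProof) -> bool {
    true // real verification checks the proof against the public inputs
}

/// What a validator would do instead of re-execution: check one proof.
fn validate_by_proof(vk: &VerifyingKey, inputs: &PublicInputs, proof: &ExecutionProof) -> bool {
    verify_proof(vk, inputs, proof)
}
```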
But there was a significant gap between the concept and reality. Generating ZK proofs for arbitrary EVM execution is computationally demanding. Early zkEVM implementations measured proving times in minutes or hours. A system that takes 30 minutes to prove a block obviously can’t replace real-time validation in a network that produces blocks every 12 seconds.
The Race to Real-Time
Over the next year and a half, the goal crystallized into a specific benchmark: proving blocks fast enough to fit within Ethereum’s slot time.
Initially, teams defined “real-time proving” loosely as generating proofs within 12 seconds, matching the slot time. But as work progressed, the definition tightened. In July 2025, the Ethereum Foundation published a formal roadmap that established precise engineering targets:
- Latency: Proofs must be generated in 10 seconds or less (with 12-second slots and ~1.5 seconds for network propagation, 10 seconds is the realistic window)
- Coverage: At least 99% of mainnet blocks must be provable within this window
- Hardware cost: Maximum capital expenditure of $100,000 for an on-premise proving setup
- Power consumption: Maximum 10 kilowatts
- Security: 128-bit security level (with 100 bits acceptable initially during deployment)
- Proof size: Under 300 kilobytes, with no trusted setups
- Code: Fully open source
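Restated as data, the targets look like this (the struct and field names are our own shorthand for this post, not anything from an official spec):

```rust
/// The Ethereum Foundation's July 2025 real-time proving targets,
/// restated as constants. Names are shorthand, not an official spec.
struct RealtimeProvingTargets {
    max_latency_secs: f64,      // proof ready within the slot, after propagation overhead
    min_block_coverage: f64,    // fraction of mainnet blocks provable within that window
    max_hardware_cost_usd: u64, // on-premise capital expenditure
    max_power_kw: f64,          // total power draw of the proving setup
    security_bits: u32,         // 128-bit target, 100 bits acceptable initially
    max_proof_size_kb: u32,     // with no trusted setup
}

const EF_TARGETS: RealtimeProvingTargets = RealtimeProvingTargets {
    max_latency_secs: 10.0,
    min_block_coverage: 0.99,
    max_hardware_cost_usd: 100_000,
    max_power_kw: 10.0,
    security_bits: 128,
    max_proof_size_kb: 300,
};
```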
These targets reveal how the Ethereum Foundation is explicitly designing for “home proving,” the idea that solo stakers who currently run validators from home should eventually be able to run provers too. The $100,000 hardware cap roughly matches what it costs to run a validator today (32 ETH stake plus equipment). The 10kW power limit corresponds to what a dedicated household circuit can deliver, roughly the draw of charging an electric vehicle.
The reasoning is straightforward: if real-time proving becomes practical but only for data centers, the network would centralize around whoever can afford industrial infrastructure. The security benefit of ZK verification would be undermined by the centralization of who can generate proofs. Home proving preserves the participation model that makes Ethereum’s security guarantees meaningful.
This triggered what Justin Drake, an Ethereum Foundation researcher, described as a “race to real-time.” Multiple zkVM teams began competing to hit the targets first, with the EthProofs dashboard providing transparent benchmarking for comparison.
Building Pico Prism
Proving arbitrary EVM execution is genuinely difficult. A single Ethereum block can contain hundreds of transactions, complex smart contract interactions, and thousands of cryptographic operations. Each of these must be translated into the constraint systems that ZK proofs operate over, and the resulting proof must be generated fast enough to matter.
The architecture we covered in Part 6, the glue-and-coprocessor model, was designed with general-purpose verifiable computation in mind. Pico handles diverse workloads because it routes specialized operations to dedicated circuits while a minimal RISC-V core manages general logic. But Ethereum block proving represents a different kind of challenge: not diversity of workloads, but a single workload at extreme scale and speed requirements.
Pico Prism emerged from optimizing the Pico architecture specifically for this use case.
The core innovation is distributed proving. Rather than trying to generate proofs on a single machine, no matter how powerful, Pico Prism decomposes the proving work across a cluster of GPUs. The architecture breaks proving into parallel phases, with computation-heavy workloads distributed across multiple machines while coordination happens efficiently between them.
This sounds simple in principle, but the engineering challenges are substantial. Proof generation has inherent dependencies; you can’t just split a block into pieces and prove them independently. Pico Prism’s distributed architecture carefully manages these dependencies to achieve near-linear scaling as machines are added to the cluster. Double the GPUs, nearly halve the proving time.
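The details of Pico Prism’s scheduler are beyond the scope of this post, but the general shape of distributed proving is split, prove in parallel, aggregate. A minimal sketch of that pattern, using OS threads as stand-ins for GPUs in a cluster and placeholder proving and aggregation steps:

```rust
use std::thread;

// Placeholders: a real system proves execution-trace shards on GPUs and folds
// them together with a recursion/aggregation circuit.
struct TraceShard(Vec<u8>); // a contiguous chunk of the block's execution trace
struct ShardProof(Vec<u8>);

/// The expensive, parallelizable phase: one worker (here, one thread) per shard.
fn prove_shard(shard: &TraceShard) -> ShardProof {
    ShardProof(shard.0.clone()) // placeholder for real proving work
}

/// Fold shard proofs into a single succinct proof. This phase is where
/// cross-shard dependencies (e.g. state carried between shards) get reconciled.
fn aggregate(proofs: Vec<ShardProof>) -> ShardProof {
    ShardProof(proofs.into_iter().flat_map(|p| p.0).collect())
}

/// Split-prove-aggregate: the shape of distributed block proving.
fn prove_block_distributed(shards: Vec<TraceShard>) -> ShardProof {
    let handles: Vec<_> = shards
        .into_iter()
        .map(|shard| thread::spawn(move || prove_shard(&shard)))
        .collect();
    let shard_proofs = handles.into_iter().map(|h| h.join().unwrap()).collect();
    aggregate(shard_proofs)
}
```

If the proving phase dominates and shards are reasonably independent, adding workers shortens the wall-clock time almost proportionally, which is the near-linear scaling described above.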
Where We Are Now
When Pico Prism launched in October 2025, it achieved the following results on Ethereum mainnet blocks under the 45 million gas limit:
- 99.6% of blocks proven in under 12 seconds
- 96.8% of blocks proven in under 10 seconds
- 6.9 seconds average proving time
- 64 RTX 5090 GPUs (consumer-grade hardware)
- ~$128,000 hardware cost
For context, when Pico Prism launched, the previous state-of-the-art zkVM had achieved 40.9% of blocks provable under 10 seconds using 160 RTX 4090 GPUs (roughly $256K in hardware). That benchmark used 36M gas blocks. Ethereum has since raised the gas limit to 45M, and is raising it once again to 60M with the Fusaka upgrade (likely live by the time you read this). On those same 36M gas benchmark blocks, Pico Prism hit 98.9% coverage while cutting hardware costs in half and reducing average proving time from 10.3 seconds to 6.04 seconds.
Justin Drake noted the progress in an October 2025 post: “In May, SP1 Hypercube proved 94% of L1 blocks in under 12 seconds using 160 RTX 4090s. Five months later Pico Prism proves 99.9% of the same blocks in under 12 seconds, with just 64 RTX 5090s. Average proving latency is now 6.9 seconds. Performance has outpaced Moore’s law ever since Zcash pioneered practical SNARKs a decade ago.”
The takeaway here isn’t the specific percentages. It’s that real-time proving works. You can buy the hardware, run the software, and prove Ethereum blocks fast enough to matter. Two years ago this was a research goal. Now it’s an engineering reality approaching the Ethereum Foundation’s targets.
Competition as Coordination
We aren’t the only team working on this. Drake’s same post counted at least nine zkVMs actively racing toward real-time proving.
This competition is intentional, as the Ethereum Foundation doesn’t want the network to depend on a single prover, just as it doesn’t want dependence on a single execution client. Multiple independent implementations provide redundancy against bugs, vulnerabilities, and operational failures. If one prover goes offline, others can fill the gap. If one implementation has a subtle bug, the others catch it.
The EthProofs dashboard, launched by the Ethereum Foundation, tracks the handful of zkVMs capable of proving Ethereum blocks at production scale, and Pico is among them. The transparent benchmarking accelerated progress across the ecosystem by giving teams clear targets and immediate feedback.
We’re continuing to optimize Pico Prism with a roadmap to achieve 99% real-time proving on fewer than 16 RTX 5090 GPUs in coming months. The goal is to hit all of the Ethereum Foundation’s targets, including the sub-$100K hardware cost and sub-10kW power consumption that would make home proving practical, and then keep pushing the numbers lower from there. The entire ecosystem is iterating toward thresholds that seemed impossible two years ago, and we intend to stay at the front of that race.
What Real-Time Proving Unlocks
Assume, for the moment, that real-time proving becomes standard. What actually changes?
Start with validators. Today, running a validator requires hardware capable of re-executing every transaction in every block within the 12-second slot time. As Ethereum’s gas limit increases (recently rising again from 45 million to 60 million, with plans for 100 million and beyond), so do the hardware requirements for validators. The network constantly balances throughput against the risk of excluding participants who can’t keep up.
With real-time proving, validators no longer execute blocks. They verify proofs. Proof verification takes milliseconds regardless of how complex the underlying computation was. A phone could verify an Ethereum block as easily as a high-end server. The asymmetry between proving and verification, which we discussed back in Part 2, finally gets applied to the base layer itself.
This has cascading effects throughout the system.
Gas limits can increase dramatically. If validators aren’t re-executing blocks, there’s no computational ceiling on how much a block can contain. The limiting factors become prover capacity and data availability, both of which can scale more gracefully than forcing every validator to perform redundant computation.
Light clients become genuinely light. Today, a “light client” on Ethereum still needs to trust someone or perform significant verification work. With ZK proofs, a light client just downloads and verifies the proof. You could eventually run a fully trustless Ethereum client on a mobile device that verifies blocks in real-time without downloading the full chain or trusting any third party. The vision of “validate the chain from your phone” becomes realistic.
The hardware barrier for participation drops. Home stakers currently need machines capable of keeping up with block execution. With proof verification, much simpler hardware becomes sufficient. This pushes against the centralization pressures that come from increasing throughput.
Layer 2 rollups benefit indirectly. When the base layer can handle more computation and data, L2s can settle more data to L1, improving their security properties and reducing costs. We’re talking potential improvements of 100x, eventually even 1000x. The entire stack gets a boost.
The implications extend beyond pure scaling. Faster finality becomes possible because proofs can be checked instantly rather than waiting for execution confirmation. Cross-chain communication becomes more trustless because chains can verify each other’s state transitions through proofs rather than relying on validators or bridges. The infrastructure-level improvements ripple outward.
Real-Time Proving Is Only Part of the Story
It’s worth stepping back to situate what we’ve covered.
Pico Prism represents one capability within a broader stack. Part 6 established that Brevis’s architecture enables coverage across the full spectrum of verifiable computation: intelligent DeFi, ecosystem growth programs, stablecoins, cross-chain interoperability, verifiable AI, privacy, security. L1 block proving is one more domain, not a replacement for the others.
This is important to note because real-time proving for Ethereum, as transformative as it is, doesn’t solve everything. Even with dramatically increased gas limits, even with proof-based verification, the base layer remains a shared resource. Transactions still compete for block space. Complex computations still cost gas. Historical data access, privacy-preserving computation, cross-chain verification, and the other workloads we’ve discussed throughout this series still often need their own proving infrastructure.
The constraints from Part 1 don’t disappear, they shift. Instead of being bottlenecked by validator re-execution, the network becomes bottlenecked by prover capacity and data availability. These are better problems to have, but they’re still problems.
And this brings us to the next part, where the series will conclude.
Real-time proving addresses the verification bottleneck for Ethereum’s base layer. But applications will always have workloads that don’t fit neatly into block execution. The heterogeneous demands we’ve traced through seven parts, the diversity of use cases and optimization requirements, the specialization that characterizes the ZK ecosystem, don’t go away just because L1 validation gets faster.
If heterogeneous workloads need heterogeneous provers, and if specialized optimization consistently outperforms general-purpose approaches, how do you coordinate all of that at ecosystem scale? How do you match applications with the provers best suited to their needs? How do you create markets for proving capacity that serve the full range of what verifiable computation requires?
Part 8 explores what comes next: an open marketplace for proof generation where different provers compete for different workloads, and applications can find the right proving infrastructure for what they actually need.

