Brevis x BNB Chain: Redefining Privacy Infrastructure for Web3


TL;DR: Brevis and BNB Chain are expanding their collaboration into privacy infrastructure, building toward a generalized framework that goes far beyond first-generation transaction privacy. The three-dimensional design space we’re developing covers what gets protected, how it can be revealed, and who gets access. The first concrete implementation is an Intelligent Privacy Pool, built in collaboration with 0xbow, in which users prove compliance through ZK-verified on-chain behavior or verified exchange account status before transacting privately. The pool will launch on BNB Chain in Q1 2026.


Rethinking Privacy in Crypto

When most people hear “crypto privacy,” they think of a specific set of tools: Zcash, Tornado Cash, Railgun. These are systems designed to hide who sent what to whom.

These tools work and serve an important purpose, but they were built with first-generation zero-knowledge technology, which was computationally limited and could only handle simple operations. That meant payment privacy was about hiding transactions and not much else. 

You couldn’t easily control who uses the system, you couldn’t verify anything about a user’s history or status without breaking their privacy, and you couldn’t build in compliance pathways or configurable access rules. The technology simply wasn’t there yet.

Today it is. 

Modern zero-knowledge technology has expanded what’s actually possible, and the design space is now dramatically larger than “hide all token transfers.” Payment privacy can now evolve into something far more intelligent and configurable than before. And entirely new categories of privacy applications are also emerging. Understanding this expanded design space requires a new mental model.

Three Dimensions of Privacy

A better way to understand privacy systems is to think in three dimensions, and most interesting applications involve choices along all three.

Privacy target: what exactly is being protected?

Transaction counterparties and amounts are the traditional focus, but the target could also be:

  • User attributes: wallet history, exchange activity, reputation signals
  • Sensitive data: AI model weights, trading intent, preference profiles
  • Computation processes: algorithm logic, inference steps, scoring mechanisms

For example, a system might shield your wallet address while still proving properties about your on-chain history (such as that you’ve held a token for six months without revealing which wallet is yours). Another might hide the logic of a matching algorithm while making its outputs publicly verifiable, so traders can trust the results without seeing how they were computed.
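The first half of that idea, binding yourself to a wallet address without revealing it, can be sketched with a simple hash commitment. This is an illustrative Python model, not Brevis code, and the function names are our own; in a real system the property proof (e.g. “this committed address held a token for six months”) would be generated inside a ZK circuit over the committed value, which plain hashing alone cannot do.

```python
import hashlib
import secrets

def commit(address: str, salt: bytes) -> str:
    """Bind a wallet address to a commitment without revealing the address."""
    return hashlib.sha256(address.encode() + salt).hexdigest()

def verify_opening(commitment: str, address: str, salt: bytes) -> bool:
    """Check that a revealed (address, salt) pair matches the commitment."""
    return commit(address, salt) == commitment

salt = secrets.token_bytes(32)
c = commit("0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B", salt)

# The commitment alone reveals nothing useful about the address...
assert verify_opening(c, "0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B", salt)
# ...and cannot be opened to a different address.
assert not verify_opening(c, "0x0000000000000000000000000000000000000000", salt)
```

The commitment is hiding (the salt blinds the address) and binding (it cannot later be opened to a different address), which is the minimal property any “shield the wallet, prove the history” design relies on.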

Unmasking protocol: how can protected information be revealed?

This is where things get interesting from a security and trust perspective, because different systems make very different choices about who can access what’s hidden and under what conditions. Some designs ensure only the user can ever reveal the protected information, while others allow a centralized operator or a committee to unmask under defined governance conditions. Some build in pathways for regulatory authorities to compel disclosure, or use hardware attestation that reveals under specific technical constraints.

This dimension determines trust boundaries and shapes compliance posture in ways that matter enormously for real-world adoption.

Target users: who can use the privacy mechanism?

Some systems offer permissionless access to everyone, while others restrict based on criteria like KYC status, on-chain history, attestations, or membership in verified sets. You might need to prove something about yourself to gain access, which sounds paradoxical but makes sense when you think about it. You can prove you belong to a group of verified users without revealing which specific member you are.
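The “prove you belong to a verified set without revealing which member you are” pattern is usually built on Merkle set membership. The sketch below is an illustrative Python model, not Brevis code: in a real deployment the inclusion check runs inside a ZK circuit, so the verifier learns only that some leaf in the tree matches, never which one. Here the check runs in the clear purely to show the structure.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree; returns the list of levels, leaf level first."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Collect sibling hashes from a leaf up to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (sibling, am-I-right-child)
        index //= 2
    return path

def verify(root, leaf, path):
    """Recompute the root from a leaf and its sibling path."""
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

members = [b"member-1", b"member-2", b"member-3"]
levels = build_tree(members)
root = levels[-1][0]
path = prove(levels, 1)
assert verify(root, b"member-2", path)
```

Published on-chain, only the root identifies the verified set; a ZK proof that `verify` succeeds for a hidden leaf is exactly a proof of membership without identification.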

Together, these three dimensions define a design space far richer than “transactions are either visible or hidden.”
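One way to make the three-dimensional framing concrete is to model it as a small taxonomy. The type and value names below are our own illustration, not an official Brevis schema; they simply show how a first-generation mixer and a gated, compliance-aware pool occupy different points in the same space.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Target(Enum):            # dimension 1: what is protected
    COUNTERPARTIES = auto()
    AMOUNTS = auto()
    USER_ATTRIBUTES = auto()
    COMPUTATION = auto()

class Unmasking(Enum):         # dimension 2: how it can be revealed
    USER_ONLY = auto()
    COMMITTEE = auto()
    REGULATORY_PATHWAY = auto()

class Access(Enum):            # dimension 3: who may use the mechanism
    PERMISSIONLESS = auto()
    VERIFIED_SET = auto()

@dataclass(frozen=True)
class PrivacyDesign:
    targets: frozenset
    unmasking: Unmasking
    access: Access

# A first-generation mixer: hides counterparties and amounts,
# only the user can ever unmask, and anyone can enter.
basic_mixer = PrivacyDesign(
    targets=frozenset({Target.COUNTERPARTIES, Target.AMOUNTS}),
    unmasking=Unmasking.USER_ONLY,
    access=Access.PERMISSIONLESS,
)

# A compliance-aware pool: also protects user attributes, offers a
# defined removal/unmasking pathway, and gates entry to a verified set.
intelligent_pool = PrivacyDesign(
    targets=frozenset({Target.COUNTERPARTIES, Target.USER_ATTRIBUTES}),
    unmasking=Unmasking.REGULATORY_PATHWAY,
    access=Access.VERIFIED_SET,
)
```

Reading system designs as points in this space, rather than as “private or not,” is the shift the rest of this post builds on.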

What This Unlocks

Once you start thinking about privacy along these three dimensions, a much wider range of applications comes into view. While the first generation of privacy tools asked the single question of whether a transaction could be hidden, the expanded design space looks at what needs protecting, who should be able to reveal what information, and who should have access in the first place.

This shift in framing opens up use cases that would have seemed impossible under the old model. Here are a few examples we think represent where things are headed.

Private credential verification for social platforms. You might want to prove you’re a long-term holder or active DeFi user to boost your reputation on a platform, but linking your wallet publicly exposes your entire portfolio and history. With the right privacy design, you can verify credentials while keeping your wallet private.

Compliant private transactions gated by verified association. Traditional mixers have a reputation problem because anyone can enter, including bad actors. A better model uses ZK to gate access: users prove they belong to a trusted set before entering, for instance by demonstrating they’re long-term exchange users with strong trading history and clean behavioral profile.

Prediction markets on private algorithms. When a platform publishes sentiment or mindshare scores that determine market outcomes, how do traders know those calculations weren’t manipulated? ZK verification lets platforms prove every algorithm run was executed faithfully while keeping the methodology private.

Privacy-preserving data for AI training. AI is running up against the limits of public-domain data. ZK addresses this by letting users compute summaries over their private data and publish only the results, along with a proof that verifies both the origin and correctness of the computation.

The common thread across all of these is that they combine choices along all three dimensions. What’s being protected varies. Who can unmask varies. Who gets access varies. But they’re all drawing from the same expanded design space that first-generation privacy tools couldn’t touch.

Toward a Generalized Privacy Framework

Look across those applications and a clear pattern emerges. They’re solving different problems for different users, but they’re drawing from the same underlying toolkit: attestations about user attributes, verifiable computation over private inputs, selective disclosure mechanisms. The primitives are shared even when the configurations differ.

This points toward shared infrastructure rather than bespoke systems: attestation registries where proofs get generated once and reused across applications, computational privacy frameworks for verifiable off-chain execution, and transactional privacy toolkits that make deploying compliant privacy pools more like configuration than research.

The three-dimensional framework maps directly onto what this infrastructure needs to provide. Targets, unmasking protocols, and access controls become the core primitives for a new infrastructure layer.

First Step: Intelligent Privacy Pool on BNB Chain

To demonstrate what this looks like in practice, Brevis and BNB Chain are collaborating with 0xbow to build an Intelligent Privacy Pool as the first concrete application.

The basic mechanics build on 0xbow’s Privacy Pools: users deposit assets and later withdraw to a new address with no on-chain link between the two. The pool maintains an Association Set of deposits that meet compliance criteria, and only deposits in this set can be withdrawn privately. What makes it different is how this Association Set is defined and enforced.

Users prove eligibility of their deposits through one of two paths: on-chain provenance (proving funds originated from compliant sources via the Brevis ZK Data Coprocessor) or off-chain KYC binding (proving control of a verified exchange account, such as Binance, via zkTLS without revealing identity). Both paths use ZK proofs to verify eligibility without exposing sensitive data or relying on trusted third parties.

If a deposit is later flagged by sanctions or associated with malicious activity, it can be removed from the Association Set, blocking further withdrawals. This provides controlled unmasking for legitimate enforcement needs.
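The deposit, eligibility, removal, and withdrawal mechanics described above can be sketched as a toy state machine. This is a minimal illustrative model, not the actual pool contract: the class and method names are our own, deposits stand in for on-chain commitments, and a real withdrawal proves Association Set membership in zero knowledge rather than checking a plain set.

```python
class PrivacyPool:
    """Toy model of an association-set privacy pool (illustration only)."""

    def __init__(self):
        self.deposits = {}            # commitment -> amount
        self.association_set = set()  # commitments with a valid eligibility proof
        self.spent = set()            # commitments already withdrawn

    def deposit(self, commitment: str, amount: int, eligibility_proof_ok: bool):
        """Record a deposit; it joins the Association Set only if its
        eligibility proof (on-chain provenance or zkTLS KYC path) verifies."""
        self.deposits[commitment] = amount
        if eligibility_proof_ok:
            self.association_set.add(commitment)

    def flag(self, commitment: str):
        """Controlled enforcement path: remove a deposit later linked to
        sanctions or malicious activity, blocking further withdrawal."""
        self.association_set.discard(commitment)

    def withdraw(self, commitment: str, recipient: str) -> bool:
        """A real withdrawal proves set membership in ZK; here we check directly."""
        if commitment in self.association_set and commitment not in self.spent:
            self.spent.add(commitment)
            return True
        return False

pool = PrivacyPool()
pool.deposit("c1", 100, eligibility_proof_ok=True)
pool.deposit("c2", 100, eligibility_proof_ok=True)
pool.flag("c2")                         # later flagged as malicious
assert pool.withdraw("c1", "0xFresh") is True
assert pool.withdraw("c2", "0xFresh") is False  # blocked after removal
```

Even in this stripped-down form, the key property is visible: privacy for deposits that stay in the set, with a defined removal pathway rather than an all-or-nothing backdoor.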

This is payment privacy, but implemented across all three dimensions: information privacy through proving attributes without revealing identity, transactional privacy through unlinkable deposits and withdrawals, and configurable access controls with removal pathways for edge cases. It shows what becomes possible when you apply the full design space to even the most familiar use case.

What Comes Next

The Intelligent Privacy Pool demonstrates that the generalized framework works in practice and that compliant privacy applications can be built from shared primitives.

The old framing of crypto privacy was limited by the technology available at the time. Now that limitation is gone. Privacy is a design space where what gets protected, how it can be revealed, and who gets access combine to enable entirely new categories of applications, and to make existing categories far more powerful.

Brevis, 0xbow, and BNB Chain are building toward that future together.