Can We Formally Verify Privacy Properties?

Motivations

We're interested in finding subtle bugs in Tornado Cash-like applications, Semaphore, and Privacy Pools. For example, what if a developer accidentally asks a user to input their deposit index into the Solidity withdrawal function? This is plausible when developers are building quickly. Can we automatically detect this without relying on manual checks?
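A minimal sketch of this hypothetical bug (all names invented here; this is not Tornado Cash's actual interface): if the withdrawal call takes the deposit index as a public argument, anyone watching calldata links each withdrawal to its deposit, collapsing the anonymity set to one.

```python
# Hypothetical mixer model (names invented). The buggy withdrawal API takes
# the deposit index as PUBLIC input, so an observer links deposit <-> withdraw.

class MixerModel:
    def __init__(self):
        self.deposits = []  # commitments, in insertion order

    def deposit(self, commitment):
        self.deposits.append(commitment)
        return len(self.deposits) - 1  # index the depositor must keep private

    def withdraw_buggy(self, deposit_index, proof):
        # Bug: deposit_index appears in public calldata.
        observed_by_attacker = deposit_index
        return observed_by_attacker

pool = MixerModel()
i = pool.deposit("commit_A")
leak = pool.withdraw_buggy(i, proof="...")
assert leak == i  # the attacker recovers exactly which deposit was withdrawn
```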

Automation could be even more valuable when designing complex privacy applications, such as the old Unirep.

Recent Advances in Formal Verification

Recent advances allow us to handle completeness and soundness by reasoning about program inputs and outputs.

github.com/zksecurity/evm-asm/blob/main/EvmAsm/Evm64/Add/Program.lean

blog.zksecurity.xyz/posts/clean/
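As a toy illustration (my own, not taken from the linked repos) of phrasing completeness and soundness as input/output properties: take the relation R(x, w): x = w², with the "proof" being the witness itself.

```python
# Toy proof system (illustrative only, and not zero-knowledge): the proof
# is the witness itself, so completeness and soundness reduce to plain
# input/output checks on prove/verify against the relation.

def relation(x, w):
    return x == w * w

def prove(x, w):
    return w  # trivial "proof"

def verify(x, proof):
    return relation(x, proof)

# Completeness: honest proofs for true statements always verify.
assert all(verify(w * w, prove(w * w, w)) for w in range(10))

# Soundness (here: perfect): whenever verify accepts, the statement is true.
assert all(relation(x, p) for x in range(10) for p in range(10) if verify(x, p))
```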

However, privacy requires reasoning about how information spreads between parties. That's where epistemic logic comes in.

Epistemic Approach

SNARKs are usually defined in a complexity/computational model: "Given a polynomial-time attacker, the probability of breaking soundness is negligible."

The epistemic approach models things in more binary terms: "Without learning the private key, I cannot decrypt the ciphertext."
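A minimal possible-worlds sketch of that binary statement (my own illustration, not from any of the cited papers): an agent "knows" a fact exactly when the fact holds in every world the agent cannot distinguish from the actual one.

```python
# Kripke-style knowledge check (illustrative). An agent knows phi at world w
# iff phi holds in every world the agent cannot distinguish from w.

def knows(agent, phi, world, worlds, indist):
    """indist(agent, w1, w2) -> True if agent cannot tell w1 from w2."""
    return all(phi(w) for w in worlds if indist(agent, world, w))

# Two worlds differing only in a private key bit that Eve never observes:
worlds = [{"key_bit": 0, "msg": "hi"}, {"key_bit": 1, "msg": "hi"}]

def eve_indist(agent, w1, w2):
    return w1["msg"] == w2["msg"]  # Eve sees only the message

# Eve knows the message, but cannot know the key bit:
assert knows("eve", lambda w: w["msg"] == "hi", worlds[0], worlds, eve_indist)
assert not knows("eve", lambda w: w["key_bit"] == 0, worlds[0], worlds, eve_indist)
```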

Process

What Should Complete Tooling Look Like?

(Rajaona, Boureanu, Ramanujam, & Wesemeyer, 2024) is a proof of concept for what full-fledged epistemic logic tooling can achieve, and discusses the limitations of existing tools that it aims to address.

The paper demonstrates cases like private authentication protocols, where messages are sent to designated targets.

```mermaid
sequenceDiagram
    participant A as Alice
    participant B as Bob
    participant C as Charlie

    A->>B: $$\{ pubk(A), N_A \}_{pubk(B)}$$
    A->>C: $$\{ pubk(A), N_A \}_{pubk(B)}$$

    alt β holds (C = B and $$pubk(A) ∈ S_C$$)
        B->>A: $$\{ N_A, N_C, pubk(B) \}_{pubk(A)}$$
    else ¬β
        C->>A: $$\{N\}_K$$
    end
```
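The point of the decoy branch in the diagram is that, without the relevant keys, an outside observer sees a reply of the same opaque shape in both branches. A rough sketch (illustrative; a hash digest stands in for real encryption):

```python
# Stand-in model of the two reply branches (NOT real encryption: a keyed
# sha256 digest plays the role of an opaque ciphertext). Outsiders see one
# fixed-length blob either way, so the branches look the same to them.

import hashlib
import os

def opaque(key, payload):
    # what an observer without `key` sees: an opaque fixed-length blob
    return hashlib.sha256(key + payload).digest()

key_ab = os.urandom(32)     # shared context between Alice and Bob (β holds)
key_decoy = os.urandom(32)  # Charlie's decoy key K (¬β)

real_reply = opaque(key_ab, b"N_A || N_C || pubk(B)")  # β branch
decoy_reply = opaque(key_decoy, b"N")                  # ¬β branch

assert len(real_reply) == len(decoy_reply)  # same observable shape
```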

Performance Note

Verification takes 51 minutes for some Basic-Hash cases, and times out beyond that.

Notes

Note that unlinkability is defined differently in different contexts. In epistemic models, unlinkability means the observer cannot tell which possible world they are in. In Tornado Cash, unlinkability is defined probabilistically: it is unlikely that deposits and withdrawals within the anonymity set can be linked, provided participants follow best practices.
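The two readings can be contrasted in a small sketch (my own illustration; names invented): the epistemic version is binary over the set of candidate worlds, while the probabilistic version assigns a guessing probability over the anonymity set.

```python
# Two readings of unlinkability over a set of candidate depositors
# (illustrative only).

def epistemically_unlinkable(candidate_depositors):
    # Epistemic reading: unlinkable iff the observer cannot narrow the
    # possible worlds down to a single depositor.
    return len(set(candidate_depositors)) > 1

def linkage_probability(candidate_depositors):
    # Probabilistic reading: the best-guess success chance over the
    # anonymity set (assuming uniform behavior / best practices).
    return 1.0 / len(set(candidate_depositors))

assert epistemically_unlinkable(["alice", "bob", "carol"])
assert linkage_probability(["alice", "bob", "carol"]) == 1 / 3
```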

Defining ZKPs in Epistemic Logic

(Costa & Brogi, n.d.) specifies security properties for ZKPs.

Broken Key Protocol: V has two keys, one of which is compromised. P has the compromised key and proves it to V without revealing which key is compromised.

```mermaid
sequenceDiagram
    participant P as Prover (P)
    participant V as Verifier (V)

    P->>V: ∗
    V->>V: Generate fresh m
    V->>P: enc(k₁, m), enc(k₂, m), h(m)
    P->>P: check(enc(k₁,m), enc(k₂,m))
    P->>V: m
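The flow can be made executable with stand-in primitives (my own toy model; XOR plays the cipher and sha256 the hash, neither is secure): P tries its single compromised key against both ciphertexts, keeps the decryption matching h(m), and returns m. V learns that P holds a compromised key, but P's reply reveals nothing about which one.

```python
# Toy model of the Broken Key protocol (stand-in primitives, NOT secure).

import hashlib
import os

def enc(key, m):  # toy cipher: XOR
    return bytes(a ^ b for a, b in zip(key, m))

dec = enc  # XOR is its own inverse

k1, k2 = os.urandom(16), os.urandom(16)
compromised = k2  # P holds this key; V must not learn which one it is

# V's challenge: m encrypted under both keys, plus a hash of m
m = os.urandom(16)
c1, c2, hm = enc(k1, m), enc(k2, m), hashlib.sha256(m).digest()

# P decrypts both ciphertexts with its key and keeps the one matching h(m)
candidates = [dec(compromised, c) for c in (c1, c2)]
recovered = next(x for x in candidates if hashlib.sha256(x).digest() == hm)

# P replies with just m: V is convinced, yet learns nothing about WHICH key
assert recovered == m
```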

What's Missing?

What's missing is the Dolev-Yao (DY) attacker model; modeling DY attackers appears to involve additional complexity.
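The core of a DY attacker can be sketched as a deduction closure (a minimal illustration, far from a full DY model): starting from everything observed on the network, repeatedly split pairs and decrypt with any key already known, until nothing new can be derived.

```python
# Minimal Dolev-Yao deduction closure (illustrative). Messages are tuples:
# ("pair", a, b) for pairing, ("enc", key, body) for symmetric encryption.

def dy_closure(observed):
    known = set(observed)
    changed = True
    while changed:
        changed = False
        for msg in list(known):
            derived = []
            if isinstance(msg, tuple) and msg[0] == "pair":
                derived += [msg[1], msg[2]]          # split pairs
            if isinstance(msg, tuple) and msg[0] == "enc" and msg[1] in known:
                derived.append(msg[2])               # decrypt with a known key
            for d in derived:
                if d not in known:
                    known.add(d)
                    changed = True
    return known

obs = {("pair", "k1", ("enc", "k1", "secret")), ("enc", "k2", "other")}
known = dy_closure(obs)
assert "secret" in known     # k1 leaked inside a pair, so its payload falls
assert "other" not in known  # k2 was never learned
```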

Tornado Cash-Like Situations

What's required?

Other Concerns

What Do Real Tornado Cash-Like System Hacks Look Like?

Real-world Tornado Cash-like systems have had critical bugs, though often different from what epistemic verification would catch.

February 2026: Foom and Veil Cash skipped the trusted setup entirely. Foom, Veil

Zcash Wallet App Threat Model

The threat model doc lists many privacy-related invariants. These might be of interest to model in epistemic logic.

Bibliography

Costa, G., & Brogi, C. P. (n.d.). Toward Dynamic Epistemic Verification of Zero-Knowledge Protocols.
Rajaona, F., Boureanu, I., Ramanujam, R., & Wesemeyer, S. (2024). Epistemic Model Checking for Privacy. 2024 IEEE 37th Computer Security Foundations Symposium (CSF), 1–16. Enschede, Netherlands: IEEE. https://doi.org/10.1109/CSF61375.2024.00020