We're interested in finding subtle bugs in Tornado Cash-like applications, Semaphore, and Privacy Pools. For example, what if a developer accidentally asks the user to pass their deposit index as a public input to the Solidity withdrawal function? That index links the withdrawal back to the deposit, collapsing the anonymity set to one. Mistakes like this are plausible when developers are building quickly. Can we detect them automatically, without relying on manual review?
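A minimal Python sketch of why this is a privacy bug (the mixer model and all names here are hypothetical, not Tornado Cash's actual interface): if the withdrawal exposes the deposit's leaf index publicly, an observer can read the depositor straight off the chain.

```python
# Hypothetical mixer: deposits append to a public on-chain list.
# A correct withdrawal proves membership in zero knowledge; a buggy one
# takes the leaf index as a *public* argument, linking the two events.

deposits = []  # public on-chain list: index -> depositor address

def deposit(addr):
    deposits.append(addr)
    return len(deposits) - 1  # leaf index, known to the depositor

def anonymity_set(public_inputs):
    """Depositors an observer cannot rule out, given the public inputs."""
    if "leaf_index" in public_inputs:
        return {deposits[public_inputs["leaf_index"]]}  # fully linked
    return set(deposits)  # correct design: any depositor is possible

i = deposit("alice")
deposit("bob")
deposit("carol")

assert anonymity_set({"leaf_index": i}) == {"alice"}       # buggy: linked
assert anonymity_set({}) == {"alice", "bob", "carol"}      # correct
```

The epistemic framing is already visible here: privacy holds when the observer's view is consistent with many possible depositors, and breaks when it is consistent with only one.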
Automation could be even more valuable when designing complex privacy applications, such as the old Unirep.
Recent advances allow us to handle completeness and soundness by reasoning about program inputs and outputs.
- https://github.com/zksecurity/evm-asm/blob/main/EvmAsm/Evm64/Add/Program.lean
- https://blog.zksecurity.xyz/posts/clean/
However, privacy requires reasoning about how information spreads between parties. That's where epistemic logic comes in.
SNARKs are usually defined in a complexity/computational model: "Given a polynomial-time attacker, the probability of breaking soundness is negligible."
The epistemic approach instead models security possibilistically, as a binary property: "Without learning the private key, I can't break the ciphertext."
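A toy possibilistic model in Python (all modeling choices are mine): an agent "knows" a fact exactly when the fact holds in every world the agent cannot distinguish from the actual one. There is no probability anywhere, only indistinguishability.

```python
from itertools import product

# Worlds: possible values of a two-bit private key. The adversary observes
# only the ciphertext, which in this toy model reveals nothing about the key.
worlds = [{"key": k, "obs": "ct"} for k in product([0, 1], repeat=2)]

def indistinguishable(w1, w2):
    # The adversary sees only its observation, not the key.
    return w1["obs"] == w2["obs"]

def knows(actual, prop):
    """K(prop): prop holds in every world indistinguishable from `actual`."""
    return all(prop(w) for w in worlds if indistinguishable(actual, w))

actual = worlds[0]  # key = (0, 0)
assert not knows(actual, lambda w: w["key"] == (0, 0))  # key stays secret
assert knows(actual, lambda w: w["obs"] == "ct")        # the view is known
```

Secrecy of the key is the statement "the adversary does not know the key", i.e. some indistinguishable world disagrees on its value.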
(Rajaona, Boureanu, Ramanujam, & Wesemeyer, 2024) is a proof of concept for what full-fledged epistemic logic tooling can achieve, and it also exposes the limitations of existing tools.
The paper demonstrates cases like private authentication protocols, where messages are sent to designated targets.
```mermaid
sequenceDiagram
    participant A as Alice
    participant B as Bob
    participant C as Charlie
    A->>B: $$\{ pubk(A), N_A \}_{pubk(B)}$$
    A->>C: $$\{ pubk(A), N_A \}_{pubk(B)}$$
    alt β holds (C = B and $$pubk(A) ∈ S_C$$)
        B->>A: $$\{ N_A, N_C, pubk(B) \}_{pubk(A)}$$
    else ¬β
        C->>A: $$\{N\}_K$$
    end
```
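The anonymity argument in this protocol can be sketched possibilistically: both branches produce a message that is an opaque ciphertext of the same shape, so an outside observer cannot tell whether β holds. A toy Python check (the two-world encoding is mine, not the paper's):

```python
# Two worlds: β holds (B sends a real response) or ¬β (C sends the decoy
# {N}_K). To an outside observer both responses are opaque ciphertexts of
# the same shape, so the worlds are indistinguishable.
worlds = {
    "beta":     {"response": ("ciphertext", "fixed_length")},
    "not_beta": {"response": ("ciphertext", "fixed_length")},
}

def observer_view(world_name):
    return worlds[world_name]["response"]

def can_distinguish(w1, w2):
    return observer_view(w1) != observer_view(w2)

assert not can_distinguish("beta", "not_beta")  # β stays private
```

The decoy branch is the whole point of the design: without it, silence from C would itself distinguish the two worlds.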
Performance is one such limitation: some Basic-Hash cases take 51 minutes to verify, and larger cases time out.
Note that unlinkability is defined differently in different contexts. In epistemic models, unlinkability means there is no way to tell which world you are in. In Tornado Cash, unlinkability is defined probabilistically: linking participants within the anonymity set is unlikely, provided they follow best practices.
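The contrast can be made concrete with a simple (and simplifying) assumption: against an adversary reduced to guessing uniformly over an anonymity set of size n, the probabilistic notion bounds the linking probability at 1/n, while the possibilistic notion only asks whether the set has collapsed to a single member.

```python
from fractions import Fraction

def linking_probability(anonymity_set_size):
    # Assumption: the adversary's best strategy is a uniform random guess
    # over the anonymity set.
    return Fraction(1, anonymity_set_size)

assert linking_probability(100) == Fraction(1, 100)  # probabilistic view
assert linking_probability(1) == 1  # possibilistic failure: set of size 1
```

Best-practice violations (withdrawing immediately after depositing, reusing a relayer pattern, etc.) shrink the effective set size, which is why the probabilistic definition is conditioned on user behavior.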
(Costa & Brogi, n.d.) specifies security properties for ZKPs.
Broken Key Protocol: V holds two keys, one of which is compromised. P holds the compromised key and proves possession of it to V without revealing which of the two keys it is.
```mermaid
sequenceDiagram
    participant P as Prover (P)
    participant V as Verifier (V)
    P->>V: ∗
    V->>V: Generate fresh m
    V->>P: enc(k₁, m), enc(k₂, m), h(m)
    P->>P: check(enc(k₁,m), enc(k₂,m))
    P->>V: m
```
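A runnable sketch of the exchange, using a toy XOR-pad "encryption" and helper names of my own (not the paper's formalism): V sends m under both keys plus h(m); P, holding only the compromised key k₁, tries it against both ciphertexts, recovers m, and returns it. The response m itself does not indicate which key succeeded.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def enc(key: bytes, m: bytes) -> bytes:
    pad = h(key)  # toy one-time pad derived from the key (NOT secure)
    return bytes(a ^ b for a, b in zip(m, pad))

dec = enc  # XOR with the same pad is its own inverse

# Verifier side: two keys; k1 is the compromised one.
k1, k2 = b"compromised-key!", b"safe-key-material"
m = b"fresh-nonce-0001"  # fresh 16-byte challenge
challenge = (enc(k1, m), enc(k2, m), h(m))

# Prover side: holds k1 only; tries it against both ciphertexts and
# keeps the decryption that matches the hash.
c1, c2, digest = challenge
response = None
for c in (c1, c2):
    candidate = dec(k1, c)
    if h(candidate) == digest:
        response = candidate
        break

assert response == m  # V accepts: P must hold one of the two keys
```

The epistemic property to verify would be that after the run, V knows "P holds some key" but does not know "P holds k₁".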
Another limitation is the Dolev-Yao (DY) attacker model: modeling DY attackers appears to require additional machinery.
What's required?
Real-world Tornado Cash-like systems have had critical bugs, though often of a different kind than what epistemic verification would catch.
February 2026: Foom and Veil Cash skipped the trusted setup entirely. (Foom, Veil)
The threat model doc lists many privacy-related invariants. These might be of interest to model in epistemic logic.
- Can't tell what the user's current shielded balance is (aside from it being zero when the wallet is created)
- Can't learn information about the value, memo field, etc. of shielded transactions the user sends/receives.
- Can't find out one of the user's wallet addresses unless the user has given it out or the adversary has a guess for what it is.
- ...
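The first invariant above can be phrased epistemically: the observer's view of the chain must be the same whatever the user's shielded balance is, except that a freshly created wallet is known to hold zero. A toy encoding (all modeling choices are mine):

```python
# Worlds carry a shielded balance and the user's shielded tx history.
# Shielded txs reveal only opaque commitments to an observer, not values.

def observer_view(world):
    return tuple("commitment" for _ in world["shielded_txs"])

fresh  = {"balance": 0,  "shielded_txs": []}
w_low  = {"balance": 5,  "shielded_txs": ["tx1", "tx2"]}
w_high = {"balance": 90, "shielded_txs": ["txA", "txB"]}

# Same tx count, different balances: the views coincide, so the observer
# cannot know the balance.
assert observer_view(w_low) == observer_view(w_high)

# A fresh wallet has no history; "balance is zero" holds in every world
# consistent with the empty view — the stated exception.
assert observer_view(fresh) == ()
```

The other invariants (memo secrecy, address unlinkability) have the same shape: pick two worlds that differ only in the secret, and check that the observer's view coincides.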