Computing on Secrets
- Verification vs Execution
- Homomorphic Encryption: Arithmetic on Ciphertext
- Secure Multi-Party Computation
- Trusted Execution Environments
- The Edges: PIR and Differential Privacy
- Resistance at the Compute Layer
Verification vs Execution
Zero-knowledge proofs operate on a result that already exists. Some machine has performed the computation on inputs that were visible to some party. A zero-knowledge proof certifies the correctness of what was computed and says nothing about who saw the inputs along the way. Zero-knowledge is the verification primitive.
Computing on secrets is the execution primitive. It answers a different question: can a computation be performed when no single party is permitted to see the inputs? The answer is yes, and it has only become practical within the last decade. Fully homomorphic encryption performs arithmetic directly on ciphertexts. Secure multi-party computation splits a computation across participants so that no participant sees the whole input. Trusted execution environments run computation inside a hardware enclave opaque to the operating system and to the machine’s owner.
The two primitives compose naturally. A pipeline that protects privacy end to end typically uses computing on secrets to operate on private data and zero-knowledge to prove the result was produced honestly. Verification and execution are complementary halves of the same problem.
Consider a voting system in which each voter’s choice must remain private and the tallied result must be publicly verifiable. The inputs (individual ballots) must stay secret. The output (the tally) must be computable and verifiable. Plain encryption handles secrecy at rest and leaves secrecy in use untouched. If ballots are encrypted and sent to a counting service, the counting service must decrypt them to count them, which means at least one party has access to every individual ballot. The architecture has moved the problem from transport to a processing node, and left it there.
Zero-knowledge proofs handle verifiability of the output while leaving the privacy of the inputs to some other mechanism. A zk-proof can certify that a tally was computed correctly from some set of ballots. The proof ensures only that the tally corresponds to those ballots, leaving open the question of who saw them along the way. Privacy of inputs is a separate requirement. Computing on secrets handles it directly. Homomorphic addition lets the counting service sum ballots in ciphertext form without decrypting any individual ballot; the final tally ciphertext is decrypted once by a distributed threshold of key holders. Multi-party computation distributes the counting across nodes so that no single node sees any voter’s choice. A trusted execution environment runs the count inside an enclave that refuses to disclose its memory to any external party, and provides an attestation that the canonical counting software ran on the expected inputs. Ballots stay private through computing on secrets; the tally is publicly verifiable through zero-knowledge. Neither family alone solves the problem. Their combination does.
Homomorphic Encryption: Arithmetic on Ciphertext
Homomorphic encryption permits operations on ciphertexts that correspond to operations on the underlying plaintexts. If Enc(a) is the encryption of plaintext a, and Enc(b) is the encryption of b, then a homomorphic addition produces a ciphertext whose decryption equals a + b, without any party learning either value in plaintext form. Stated plainly, the construction sounds close to impossible, and it is achievable only under specific mathematical assumptions.
Early schemes supported one operation only. RSA supports homomorphic multiplication as a side effect its designers did not advertise. The Paillier cryptosystem supports homomorphic addition. Partially homomorphic schemes have been used for narrow applications for decades, including e-voting and private information retrieval.
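The additive property is concrete enough to sketch. Below is a toy Paillier implementation; the primes are far too small for real use and every parameter choice is illustrative only. The point is the last three lines: multiplying two ciphertexts produces a ciphertext that decrypts to the sum of their plaintexts.

```python
import math
import random

def keygen(p=10007, q=10009):
    """Toy Paillier keypair from two (insecurely small) primes."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)             # valid because we fix g = n + 1
    return n, (n, lam, mu)           # public key, secret key

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)       # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # g = n + 1, so Enc(m) = (1 + n)^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

def add(n, c1, c2):
    return (c1 * c2) % (n * n)       # ciphertext product = plaintext sum

pk, sk = keygen()
tally = add(pk, encrypt(pk, 12), encrypt(pk, 30))
print(decrypt(sk, tally))  # 42
```

In the voting example above, ballots encrypted as 0 or 1 can be combined the same way, ciphertext by ciphertext, with a single decryption at the end yielding the tally and no decryption of any individual ballot.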
Craig Gentry’s 2009 dissertation gave the first construction of a scheme capable of both addition and multiplication on arbitrary circuits. The original construction was roughly a billion times slower than the equivalent plaintext computation, which ruled it out for any serious application. What mattered was the proof of existence itself. Fifteen years of engineering since have cut the overhead by roughly four orders of magnitude, moving narrow applications into feasibility while keeping general-purpose computation out of reach.
Present deployments fall into categories whose common property is that narrow usefulness beats broad slowness. Encrypted machine-learning inference lets a client send an encrypted input to a service and receive an encrypted output without the service seeing the input; the applications include encrypted medical diagnosis and financial-risk scoring. Private information retrieval lets a client query a database without revealing which record it wants. Encrypted databases let a service host encrypted customer data and answer queries over it without decrypting. Homomorphic encryption still falls short of fully general-purpose computation at acceptable speed. The overhead remains large enough that applications are chosen for their tolerance for latency and their limited complexity.
The structural claim underneath any specific application is simpler. Homomorphic encryption breaks the assumption that the service running a computation must see the data on which the computation runs. That assumption had been treated as a law of cloud architecture for two decades. Homomorphic encryption denies it. The service and the user become parties to an exchange in which the service performs a function and the user retains epistemic ownership of the inputs. The contract structure that was previously infeasible is now infeasible only at certain workload sizes, and the set of feasible workloads expands each year.
Secure Multi-Party Computation
Multi-party computation (MPC) addresses a different problem with a different architecture. Multiple parties each hold private inputs; they wish to jointly compute a function of those inputs and to learn only the output. No party learns any other party’s input beyond what the output itself reveals.
The classical constructions rely on a small family of cryptographic primitives. Secret sharing, introduced by Shamir in 1979, splits a secret into shares such that any threshold number of shares reconstructs it and any smaller subset reveals nothing. Garbled circuits, introduced by Yao, let two parties evaluate a Boolean circuit where one party encrypts the circuit and the other evaluates the encrypted circuit; each gate’s behavior is correct while its intermediate values stay hidden. Oblivious transfer lets a party receive one of several values chosen by another party without the sender learning which value was chosen.
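Shamir’s scheme is compact enough to sketch directly. A minimal version over a fixed prime field (the field size and parameters here are illustrative): the secret is the constant term of a random polynomial of degree t − 1, shares are points on the polynomial, and any t shares recover the constant term by Lagrange interpolation at zero.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is in the field Z_P

def split(secret, n, t):
    """Split `secret` into n shares, any t of which reconstruct it."""
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=314159, n=5, t=3)
print(reconstruct(shares[:3]))   # 314159
print(reconstruct(shares[1:4]))  # 314159
```

Any subset below the threshold is consistent with every possible secret, which is the information-theoretic sense in which fewer than t shares reveal nothing.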
The single most successful MPC deployment is threshold signing, and it has reached institutional scale. A threshold signature scheme distributes the private key across multiple parties such that signing requires a threshold of them to cooperate; no subset below the threshold can produce a valid signature. No party ever holds the full private key, so a breach of any individual party does not compromise the system. Fireblocks, the largest institutional custody provider, operates threshold-signed wallets for roughly two thousand customers including major banks and payment processors. The architecture is the basis on which several regulated institutions extended credit into the crypto-asset space at all, because the alternative of a single-party custodian required levels of counterparty trust they were not prepared to extend.
Two Schnorr-based constructions have made threshold signing directly visible on Bitcoin. MuSig2, specified in BIP 327, aggregates an arbitrary number of signers into a single public key and a single signature indistinguishable from a normal single-party Schnorr signature. FROST extends the same approach to t-of-n threshold signing, so any t of the n key shareholders suffice to produce a signature and the protocol tolerates up to n − t shares going offline or hostile without losing liveness. Both became usable on Bitcoin with Taproot activation in November 2021. Chain analysis cannot distinguish a ten-of-ten MuSig2 transaction from a single-key transaction, nor a three-of-five FROST spend from either. The architecture delivers the distributed-trust guarantee of MPC and the base-layer privacy of Bitcoin in one construction.
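The aggregation idea can be illustrated with a toy n-of-n additive Schnorr scheme over a small, insecure group. This sketch deliberately omits the nonce commitments and key tweaking that make MuSig2 safe against rogue-key and related attacks; it shows only why summed key shares and summed nonce shares verify as one ordinary signature.

```python
import hashlib
import random

# Toy Schnorr group: safe prime p = 2q + 1, generator g of order q.
p, q, g = 10007, 5003, 4

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def aggregate_sign(secret_keys, msg):
    """n-of-n additive aggregation: each signer contributes a key
    share and a nonce share; the sums behave like one signature."""
    X = pow(g, sum(secret_keys) % q, p)            # aggregate public key
    nonces = [random.randrange(1, q) for _ in secret_keys]
    R = pow(g, sum(nonces) % q, p)                 # aggregate nonce
    c = H(R, X, msg)
    s = (sum(nonces) + c * sum(secret_keys)) % q   # sum of partial sigs
    return X, (R, s)

def verify(X, sig, msg):
    """Ordinary single-party Schnorr verification: g^s == R * X^c."""
    R, s = sig
    return pow(g, s, p) == (R * pow(X, H(R, X, msg), p)) % p

keys = [random.randrange(1, q) for _ in range(3)]  # three signers
X, sig = aggregate_sign(keys, "pay 1 BTC")
print(verify(X, sig, "pay 1 BTC"))  # True
```

The verifier runs the unmodified single-key check, which is the source of the indistinguishability claim: nothing in the signature reveals how many parties contributed to it.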
MPC distributes trust across parties who do not trust one another, and the distributed trust produces a cryptographic guarantee that no single party can violate. This is a different guarantee from what homomorphic encryption provides. Homomorphic encryption lets one party execute a computation on another party’s private input. Multi-party computation lets several parties execute a computation on their own private inputs, with no party learning any other party’s input. The two primitives solve complementary problems, and many real systems combine them. The cryptographic guarantee replaces a coordination cost that was previously priced into every cooperative computation: the cost of establishing enough trust to share the inputs. When that cost falls, cooperation expands to cases that were previously uneconomic.
Trusted Execution Environments
Trusted execution environments (TEEs) take a different architectural path. Where cryptographic means prevent the computing party from seeing the inputs, a TEE achieves the same outcome through hardware isolation. A region of memory inside the processor is marked as an enclave; code running inside the enclave decrypts inputs and produces outputs, while any code running outside the enclave (the operating system included, and every other process on the host alongside it) cannot observe enclave memory. An attestation mechanism lets a remote party verify that a specific piece of software is running inside an authentic enclave on an authentic processor.
TEE deployment at consumer scale breaks into two patterns. The first is the device-local enclave. ARM TrustZone on Android phones and Apple’s Secure Enclave on iOS devices, together with the TPM 2.0 specification that ships on every modern PC, place a hardened execution environment inside hundreds of millions of consumer devices, where it holds biometric templates, payment-credential keys, disk-encryption keys, and attestation material. The device-local enclave is the oldest and most widely deployed consumer TEE pattern, and it is the least controversial because the user and the enclave live in the same physical device under the same owner.
The second pattern is the remote attested enclave. A consumer device sends a request to a cloud-operated server whose hardware and software can be cryptographically attested and whose code is published for inspection. Each server’s software is cryptographically measured. The measurement is published, and the client device refuses to send a request to any server whose measurement does not match a published image. The servers are built to hold no persistent state beyond the request’s lifetime, so that even a full compromise of a server at a later time yields no earlier request’s inputs. Signal’s Private Contact Discovery was an early instance of this pattern. Apple Private Cloud Compute, introduced in June 2024 to handle Apple Intelligence workloads exceeding on-device capacity, is the most consumer-visible deployment to date and the one whose architecture has been published in the greatest detail.
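The client-side half of this pattern reduces to a measurement check. The sketch below is hypothetical (the image names and the transparency list are invented), and real deployments such as Private Cloud Compute also verify a hardware-rooted signature over the attested measurement, which is omitted here; the sketch shows only the refuse-unless-published rule.

```python
import hashlib

# Hypothetical transparency list: hashes of the software images the
# operator has published for public inspection.
PUBLISHED_MEASUREMENTS = {
    hashlib.sha256(b"inference-server-image-v1").hexdigest(),
    hashlib.sha256(b"inference-server-image-v2").hexdigest(),
}

def measure(software_image: bytes) -> str:
    """The enclave's attested measurement: a hash of its software."""
    return hashlib.sha256(software_image).hexdigest()

def client_accepts(attested_measurement: str) -> bool:
    # Refuse to send the request unless the server attests to running
    # exactly one of the published images.
    return attested_measurement in PUBLISHED_MEASUREMENTS

print(client_accepts(measure(b"inference-server-image-v2")))    # True
print(client_accepts(measure(b"inference-server-image-evil")))  # False
```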
The promise of the architecture is that a cloud service can be operated in a way that the service itself cannot observe user inputs, which is the same structural claim as homomorphic encryption achieved through different means. The limit is that the guarantee depends on the provider’s honest publication of server images and on the assumption that no side-channel attack breaks the enclave’s hardware isolation. Trust is reduced to a narrower surface, and still required on that surface. The user has traded trust in a provider’s operational practices for trust in its hardware-software supply chain and attestation discipline. Whether the new trust assumption is preferable depends on the user’s model of the provider’s incentives.
TEE-based privacy scales the most easily today. Homomorphic encryption remains expensive for general computation. Multi-party computation requires at least two non-colluding parties, and organizing them is a social problem as much as a technical one. TEEs run at plaintext speeds on general-purpose workloads, and the cryptographic overhead is limited to attestation and to the isolation boundary. The primitive fits the cloud-service business model almost exactly. The history of side-channel attacks against Intel SGX, however, shows that the hardware-integrity assumption remains actively contested. The Spectre and Meltdown families, along with the successor vulnerabilities published every year since, show that the contested boundary extends across the entire processor design. A TEE-based privacy claim must be evaluated against the specific hardware vendor’s track record, the specific attestation architecture it uses, the specific software image published for verification, and the specific side-channel posture of the CPU on which the enclave runs. TEEs are useful when the alternative is no privacy guarantee at all, and they are inferior when the alternative is a cryptographic primitive that does not require trusting hardware.
The Edges: PIR and Differential Privacy
Two further primitives complete the picture at the edges of the design space.
Private Information Retrieval lets a client retrieve an item from a server’s database without the server learning which item was retrieved. The query index is the private input; the item is the output. The server performs a computation over its entire database that depends cryptographically on the query but preserves no observable trace of which element the query selected. Information-theoretic PIR replicates the database across multiple non-colluding servers; computational PIR operates with a single server and relies on additively homomorphic or fully homomorphic encryption. The cost is that every query must touch every record in the database, which is why computational PIR took two decades of engineering to become practical. The applications that matter are the ones in which the query is itself the information. Bitcoin light clients are the clearest case: BIP 37 bloom filters leak which addresses a wallet cares about, and BIP 158 compact block filters leak less but still allow traffic analysis. A PIR-based light client would let a wallet fetch block filters and transaction data without telling the serving node which addresses or transactions it is watching.
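The two-server information-theoretic construction is simple enough to sketch over a toy database (record names illustrative). Each server individually sees a uniformly random subset of indices and learns nothing; the subsets differ only at the wanted index, so XORing the two answers cancels every record except that one.

```python
import secrets

DB = [b"rec0", b"rec1", b"rec2", b"rec3"]  # equal-length records

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(db, subset):
    """Each server XORs together the records the query selects --
    note it must touch a share of the whole database to do so."""
    out = bytes(len(db[0]))
    for i in subset:
        out = xor(out, db[i])
    return out

def query(db_size, want):
    """Two subsets that are uniformly random in isolation but differ
    exactly at index `want`."""
    mask = [secrets.randbelow(2) for _ in range(db_size)]
    s1 = {i for i, b in enumerate(mask) if b}
    s2 = s1 ^ {want}  # symmetric difference: flip membership of `want`
    return s1, s2

s1, s2 = query(len(DB), want=2)
record = xor(server_answer(DB, s1), server_answer(DB, s2))
print(record)  # b'rec2'
```

The cost structure of the text is visible here: every query forces each server to scan records across the whole database, which is why PIR engineering effort concentrates on making that scan cheap.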
Differential privacy is a statistical guarantee about aggregated data releases. It operates on the privacy of individuals whose records contribute to a released aggregate, a different concern from protecting individual records during a live computation. The mechanism is randomization: a differentially private release looks essentially the same whether or not any specific individual’s record was included. An adversary who sees the output cannot tell whether any particular record was present, up to a multiplicative factor parametrized by epsilon. Small epsilon gives strong privacy at the cost of accuracy. The U.S. Census Bureau applied this to the 2020 redistricting release through the TopDown Algorithm. Apple uses local differential privacy for aggregate telemetry on iOS. Differential privacy composes across queries: two releases at epsilon₁ and epsilon₂ reveal information at combined level epsilon₁ plus epsilon₂, which means the privacy budget is a scarce resource that real deployments must allocate. The guarantee operates on the individual’s contribution to an inference and leaves the inference itself visible; a DP-released statistic will still reveal that everyone in a group buys a product, and any individual known to be in that group will be implicated by the population-level pattern.
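A minimal sketch of the Laplace mechanism for a counting query, with illustrative data and epsilon. A counting query has sensitivity 1 (adding or removing one record moves the count by at most 1), so Laplace noise with scale 1/epsilon yields epsilon-differential privacy for a single release.

```python
import random

def laplace_noise(scale, rng=random):
    # The difference of two Exp(1) samples is Laplace(0, 1) distributed.
    return scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

def private_count(records, predicate, epsilon, rng=random):
    """epsilon-DP counting query: sensitivity-1 count plus
    Laplace(1/epsilon) noise."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

ages = [23, 31, 44, 19, 52, 38, 61, 27]
# Smaller epsilon means stronger privacy and noisier answers.
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```

Composition is what makes the budget scarce: answering this query twice at epsilon = 0.5 spends a combined epsilon of 1.0 on the same individuals, so a deployment must decide in advance how to allocate its total budget across queries.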
Resistance at the Compute Layer
A system’s security is measured by the cost required to compromise it. Computing on secrets extends that measurement to the compute layer of the stack, which had previously been free to observe.
Homomorphic encryption’s resistance surface is the set of cryptographic assumptions underlying the specific scheme in use. The assumption set is the same one underlying the post-quantum standards: learning-with-errors for lattice schemes, the hardness of decoding random linear codes for code-based schemes. A break in a specific scheme would affect the applications using that scheme; it would not break the family, because alternative schemes under different assumptions exist.
Multi-party computation’s resistance surface is the threshold model. The guarantee holds if and only if fewer than the threshold number of participants collude. A threshold must be chosen large enough that collusion is infeasible, and the participants must be chosen to have diverse incentives. The construction binds together through the social structure of the participant set as much as through the cryptographic math.
Trusted execution environments have the largest resistance surface. Hardware vendor integrity, attestation mechanism correctness, side-channel resistance, and software-image verification are each separate assumptions, and the composed guarantee is the conjunction of all of them. A break in any single assumption compromises the whole. The empirical track record includes multiple documented side-channel attacks, and the architectural response has been to rebuild the hardware where software patching was insufficient. History shows both that the primitive is contestable and that the contesting process is active.
Compositions inherit this logic. An attacker must compromise each primitive in the composition, and the compromise cost compounds. The defender’s architectural choice is the selection of primitives whose compositions raise the compromise cost above the adversary’s willingness to pay. Computing on secrets promises something narrower than invulnerability and more useful than cryptographic folklore: the cost curve of observation now bends upward at the execution layer, which is the layer the adversary had previously assumed was free. That bending is the principal achievement of the last decade of cryptographic and hardware-security engineering, and it is the architectural basis on which the privacy-preserving applications of the next decade will be built.