The end of "trust me bro" - confidential computing for everyone
- Confidential computing history
- The second part that makes the magic
- Enough with the theory
- The (bad) trust chain
- So what?
Since the dawn of computing we’ve been handing over our data to machines we don’t control. First it was the mainframe in the basement, then the server in the data center, now the container in someone else’s cloud. The bargain has always been the same: convenience in exchange for trust. You trust that the admin won’t peek at your data. You trust that the code running is what they claim. You trust that nobody compromised the infrastructure while you weren’t looking.
This trust is unverifiable. You simply hope.

For most applications this is fine. Nobody cares if their todo list app operator is honest. But some data is different. Bitcoin private keys, medical records, your naked selfies (who am I kidding). For these, “trust me bro” has always been a deeply unsatisfying answer.
But it doesn’t need to be this way.
Confidential computing history
Confidential computing is not a new thing. Intel SGX has existed long enough to be deprecated. SGX stands for Software Guard Extensions, and while it was far from great (unless you measure greatness by the number of vulnerabilities), it kickstarted the confidential computing movement. It was also used for DRM, so let’s just say it was not winning hearts and minds. Its design was also severely limited, making it more of a specific-use-case tool than a general solution.
Lucky for us, things have evolved over the years. We now have AMD SEV-SNP and Intel TDX, both of which enable a new era of confidential computing: guest VM isolation.
The core idea is simple. The CPU creates an encrypted virtual machine that nothing else can read. Not the operating system. Not the hypervisor. Not the cloud provider’s staff. Not other processes on the same machine. The encryption keys exist only inside the CPU silicon and never leave it. That means encryption of data in use, which completes the trifecta alongside encryption at rest and encryption in transit.

The second part that makes the magic
Running things confidentially is good, but if malicious code runs (looking at you, DRM) instead of what you wanted, the code itself can still leak your secrets. You need a second piece to seal the deal: the ability to verify that a specific version of your code is running in the TEE (short for Trusted Execution Environment).
This is where remote attestation comes in.
A confidential VM is started inside a TEE. The platform computes measurements of the initial state it cares about (initial memory, configuration, data, etc.) and creates an attestation report containing those measurements and other metadata. The report is signed with a hardware-rooted attestation key that exists only within the TEE. This is also where most of the trust assumptions in this setup lie: you trust that the hardware vendor didn’t leak or sell the root keys validating the certificate chain.
What this process enables is certainty that a specific version of the software (or a specific Docker image) is running there. It also means that for verification to be meaningful, the software needs to be open source (or at least you need access to it).
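The attestation flow described above can be sketched in a few lines. This is a toy model, not any vendor's real API: the field names, the `signature_ok` stand-in for certificate-chain validation, and the idea that the measurement is a simple hash of the initial state are all illustrative assumptions.

```python
import hashlib

def verify_attestation(report: dict, expected_measurement: str,
                       signature_ok: bool) -> bool:
    """Accept the TEE only if (1) the report's signature chains back to
    the hardware vendor's root key (modeled here by signature_ok) and
    (2) the measured initial state matches the build we expect."""
    if not signature_ok:  # stands in for real certificate-chain validation
        return False
    return report["measurement"] == expected_measurement

# Toy report: the measurement is a hash over the initial VM state.
initial_state = b"initial memory + config + data"
report = {"measurement": hashlib.sha256(initial_state).hexdigest()}

# The verifier independently computes the measurement of the open-source
# build it expects, then compares.
expected = hashlib.sha256(initial_state).hexdigest()
assert verify_attestation(report, expected, signature_ok=True)
```

The point of the sketch is the second check: without access to the software (to reproduce `expected`), a valid signature alone tells you nothing about *what* is running.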
Enough with the theory
I’ve been thinking about this stuff for a while, initially through the lens of confidential data vending machines for nostr during SEC-01. I have since spent a lot of time digging into confidential computing, especially the confidential inference enabled by the Blackwell generation of Nvidia chips. But as they say: show, don’t tell.
Last week I announced the first Cashu mint in a TEE:
nostr://nevent1qqsp9rnxv5d6hq8465attxc6luywdxdzekleztkr7e782td4xwuy2fg285jcl
Cashu mints are custodial by design. You deposit bitcoin, you get ecash tokens. The mint holds your funds. Well, technically whoever runs the bitcoin backend of the mint holds the funds. But a lot of things can still go wrong. Mints can get compromised. Operators can install tracking software. Or maybe they are just running an old, vulnerable version of the mint software. These are just a few reasons why it is very beneficial to ensure that the mint you are connecting to is running a specific (and even better, vetted) version of the mint software.
With a TEE mint, the game changes. Users can request an attestation report, verify the signature and confirm the mint is running exactly the code it claims, and that it’s running in a confidential container, ensuring nothing leaks. Not because the operator said so, but because cryptography proved it.
Now replace cashu mint with anything else you care about. You could run a Lightning node where even your hosting provider can’t access the keys. Or a multisig coordinator. Custodians that prove their security posture cryptographically. Not “we passed an audit” but “here’s proof our systems are configured correctly right now.” Run AI inference on private data without exposing inputs to the model operator. Prove which model version generated an output.
VPNs that cryptographically prove they don’t log. Messaging relays with verifiable no-retention.
No more policy documents and pinky promises.
The (bad) trust chain
Yes, there is some trust involved. It’s not a panacea, it’s just better than what we have now.
When you verify an attestation report, you’re trusting:
- The manufacturer’s root of trust, revocation, and TCB reporting
- That the expected measurements you’re comparing against are the real ones
- The attestation service, if you’re using one instead of doing the full verification yourself
(if you want to skip the really geeky part scroll down to So what?)
Confidential containers on Azure
So much for the theory. How does this work in practice?
I built Nutshell TEE on Azure Confidential Containers, which run on AMD SEV-SNP hardware. Azure handles a lot of the complexity, but understanding the pieces helps.
A confidential container group on Azure consists of multiple containers running together inside a single TEE boundary. The entire group shares the encrypted memory space. In my case, the architecture has four containers:
The SKR sidecar handles attestation. It talks to the AMD hardware to get the raw attestation report, then sends it to Microsoft’s Azure Attestation service (MAA) which validates the report and returns a signed token. This token is what external clients verify.
The encfs sidecar mounts encrypted storage. The database lives on a LUKS-encrypted filesystem, and the decryption key can only be released inside the TEE.
Caddy handles HTTPS termination. And the Nutshell mint runs the actual Cashu protocol.
So the deployment is pretty heavy in terms of infrastructure. Stateless services are much easier to deploy (you get rid of the encfs sidecar and the complexity around it).
Verifiable software identity
The clever part of Azure’s implementation is how they handle software identity.
Before deployment, you define a policy that specifies exactly which container images are allowed to run, down to the layer digests. This policy gets hashed, and that hash (Azure calls it “hostdata”) becomes part of the attestation token.
When a client wants to verify the mint:
1. They fetch the attestation token from the mint
2. They verify the token’s signature against Microsoft’s public keys
3. They extract the hostdata hash from the token
4. They compute the expected hash from the published policy
5. If the hashes match, they know the exact code running
The policy is public, in the git repo. Anyone can compute the expected hash and compare it against what the attestation token reports. If they match, you have cryptographic proof that the mint is running exactly the code you see in the repository.
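The hash comparison in steps 3–5 can be sketched as follows. Real verification first checks the token’s signature against Microsoft’s published keys (step 2); here only the hostdata check is shown. The claim name `x-ms-sevsnpvm-hostdata` matches the MAA token claim for SEV-SNP, but the exact input to the hash (shown here as SHA-256 over the raw policy text) is a simplifying assumption to check against Azure’s tooling.

```python
import hashlib

def hostdata_matches(token_claims: dict, policy_text: str) -> bool:
    """Return True if the attestation token's hostdata claim equals the
    SHA-256 digest of the published container policy."""
    expected = hashlib.sha256(policy_text.encode()).hexdigest()
    return token_claims.get("x-ms-sevsnpvm-hostdata") == expected

# Toy demo: a policy as published in the repo, and a token whose
# hostdata claim carries that policy's hash.
policy = "package policy\n# ...allowed container image digests..."
claims = {"x-ms-sevsnpvm-hostdata":
          hashlib.sha256(policy.encode()).hexdigest()}
assert hostdata_matches(claims, policy)
```

Because anyone can recompute `expected` from the public repo, a matching claim binds the running deployment to that exact policy, and hence to the exact image digests it allows.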
Key release
The encryption key for the database adds another layer.
It’s an RSA-HSM key stored in Azure Key Vault with a release policy tied to the hostdata hash. If anyone tries to access that key from outside the enclave, or from an enclave running different code, Key Vault refuses. The key literally cannot be released unless the attestation claims match the policy.
This means even if someone steals the encrypted database blob, they can’t decrypt it without running the exact same code in a genuine TEE. The cryptographic binding between “what code is running” and “what secrets are accessible” is enforced by hardware and the key vault, not by hopes and prayers.
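A release policy of the kind described above looks roughly like the following (expressed here as a Python dict mirroring Key Vault’s release-policy JSON). The attestation authority URL and the hash value are placeholders, and the structure is a sketch, not a copy of the actual deployment’s policy.

```python
# Sketch of a secure-key-release policy: the key is released only to an
# enclave whose MAA attestation token carries the expected hostdata hash.
release_policy = {
    "version": "1.0.0",
    "anyOf": [
        {
            # Placeholder attestation authority (the MAA endpoint that
            # signed the token).
            "authority": "https://sharedeus.eus.attest.azure.net",
            "allOf": [
                {
                    # Release only if the running code matches the policy.
                    "claim": "x-ms-sevsnpvm-hostdata",
                    "equals": "<sha256-of-container-policy>",
                }
            ],
        }
    ],
}
```

Key Vault evaluates this policy against the attestation token presented at release time, so the binding between code identity and key access never depends on the operator.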
So what?
Bitcoin gave us verifiable money. Confidential computing gives us verifiable execution, and a confidential one at that. With the advancement of confidential containers and broader hardware availability (for example, you can rent a server that supports AMD SEV-SNP for $200/month), things have become much easier. With AWS Nitro or SGX-based solutions you generally need to architect your application for the specific platform, which causes a lot of overhead (and vendor lock-in). With confidential containers you can theoretically throw any container in and it will work. In practice you will need to at least figure out how to handle storage persistence and make sure your builds are reproducible. But that’s nothing compared to fitting your entire app into 256 MB (SGX) or making it work over Unix-socket communication (AWS Nitro).
In the example above I used Azure just because the infrastructure is already deployed and set up. But you could deploy a similar type of infrastructure anywhere and self-host the complete confidential containers stack. Ideally more infrastructure providers would agree on exactly the same stack (big cloud providers inevitably tie it into their existing authentication, which makes that part platform-specific) so that users could easily switch between providers if necessary.
We should make this the default for any project or developer in the space, not a privilege reserved for well-funded institutions like Signal. We need to make it very simple for every app developer and infrastructure operator to use these technologies, make it easy for devs to integrate attestation libraries into their wallets and clients, and harden the infrastructure across the entire decentralized universe.

Reading materials:
- https://confidentialcontainers.org/
- https://ungovernable.tech/ConfidentialComputing.html
- https://github.com/aljazceru/nutshell-azure-tee
Valid argument, but that’s kind of the point. If you need a solution against a very motivated state actor, then the Israeli pagers are a good example of how fucked we are 😅
The way I see it, it removes some of the trust required in a random infrastructure provider or organized crime. Early in bitcoin history we had issues where sysadmins would rug bitcoiners through their hosting providers. It also provides some shield against the liability of (accidental) data collection and lawmakers deciding that running specific things is illegal.
There are always market forces at play - how much is one willing to risk burning down a chip manufacturer through potential exposure. But in any case this is not a solution to bet your kid’s life on when you are the most wanted person in the world.
You have just proven my point
It’s a bit too Clipper Chip for me, unless I can burn my own hardware key into the CPU.
As it stands, the root of trust is a “trust me bro” from a major manufacturer.
Jim the Ln noderunner isn’t worth the NSA’s time to snatch and put in a small room with a bright light in his face.
But AMD and Intel CEOs certainly are worth their time. Not that the NSA would have to - they’ll have had their own CALEA portal to generate valid keys since before the first silicon was etched.
This is a great technology for securing data and computation against unorganised crime, against Diego in Paraguay who’s always trying to send Viagra spam through my relay.
Against ideologically-motivated enemies with nation-state backing, it’s a Trojan horse.
Instead of using permissioned Azure, did you consider Oasis ROFL? It’s not just the TEE you need. You also need your users to verify that a specific container image runs on the other side, e.g. all the upgrades being made and that the TLS tunnel actually ends in the enclave. Oasis also has an on-chain marketplace for renting the hardware settled in crypto… https://docs.oasis.io/build/rofl/
It’s a very long word salad for not much said, completely missing the point @4c800...e3b2f
So bullish on that - what if we could trust our computers?
while it is kinda interesting, it’s putting trust in a place that is the worst possible place. just the mere mention of Azure sets off my alarm bells.
lightning is already secure, by game theory, cryptography and the independence of the thousands of nodes running bitcoin. it doesn’t need an enclave, neither does lightning.
and this is not just a thing with bitcoin, we have similar issues with nostr, though the value of the data is less, it’s hard to really put a price on the sovereignty that is compromised when relay operators are only providing “free” services (looking at you, primal and damus) to gather metadata. yes that was an accusation, sue me, but the evidence i have seen tells me, especially the latter, has got little birdies in his ear encouraging him to do toxic things, like during the filter wars last year.
and you expect me to believe that these extremely widely used relays are running trustworthy software? it’s not even only about the fact that the developer of the main relay used on these big relays is an absentee landlord, he’s not a nostr user, he doesn’t give a shit about the privacy or security of metadata of users, and silly people who think they are smart, are cooking up half baked protocols, the most notable example being giftwrap/seal/nip-44 encryption, which has the most retarded padding scheme ever invented, and the split-double-sent giftwraps (you have to send one to yourself as well as your conversation partner) completely destroys the majority of the protection against metadata.
and worst, it is one of the most common newbie mistakes in signals intelligence theory: encryption is always a second best option to not using a publicly visible transit for the data. or in more common terms, two people can keep a secret, if one of them is dead.
so, while it is kinda cool, it’s just more steps, complex, expensive steps, to yield a benefit that is lower in value than the cost of producing it.
bitcoin doesn’t have this problem, because its security doesn’t have any dependence on trust. it banks on the probability, that all players will not violate the protocol because doing so, destroys the value of their assets.
lightning doesn’t have this problem, because it uses source routed onion payments and HTLCs to make it difficult and again, effectively shooting yourself in the foot to try and interfere with or monitor it.
ecash and nostr relays share in common a lack of a protocol to defend user privacy. they require the software to have these things implemented. it even literally says, in NIP-42, that it can be used to enable TRUSTWORTHY relay operators to block snooping on users’ metadata, which can happen IN REAL TIME on such relays as relay.damus.io
and i don’t pick on will for no reason. the dude has proven over and over again he can’t be trusted, he doesn’t care about user’s metadata privacy, he doesn’t believe that bitcoin node runners should have any decisionmaking power about what transits their network, and he, and miljan, i don’t pick on them for no reason, promote what is effectively an ad for microsoft.
now, go think about that for a while.
https://media.tenor.com/WB0asE2__fAAAAAC/the-book-of-boba-fett-this-is-the-way.gif