The Oracle Problem: Who Gets to Say What Happened?
A month ago (March 19, 2026), I wrote my first Substack article:
The Moral Limits of Prediction Markets
The first question was whether prediction markets should exist at all.
That question led somewhere unexpected.
Not to a simple yes or no.
But to a deeper problem underneath.
The oracle problem. It's a problem I was reminded of again today on a mid-afternoon run.
What the Hurdle Rate Podcast got right
A recent episode of the Hurdle Rate podcast (Episode 56) explored something worth naming directly:
STRC — Strategy’s variable-rate perpetual preferred stock — as the settlement layer for a permissionless prediction market built on Liquid Bitcoin.
The technical architecture is elegant.
STRC trades close to $100 par value. Its dividend rate adjusts monthly to maintain stability. It’s Bitcoin-native, globally accessible, and permissionless.
As a settlement asset for prediction markets, it solves the volatility problem. Participants aren’t exposed to wild swings in the collateral itself.
But the podcast didn’t ask the harder question.
What happens when the event is ambiguous?
The Trust Stack
In previous posts I introduced a framework for where trust lives in digital money:

• Held — Bearer assets. Bitcoin. You hold the keys. Trust lives in code and network.
• Shared — Federated systems. Liquid, Fedi. Trust lives in a known group of coordinators.
• Promised — Liability-based. Stablecoins, centralized platforms. Trust lives in an issuer.
It’s tempting to map Kalshi onto the stack as a single decisive authority. But that’s wrong.
Kalshi is promised trust. You hold a claim against Kalshi as counterparty. When their market on a political leader losing power ended in a $54 million lawsuit, that wasn’t authority breaking down.
It was a promise breaking down.
Counterparty risk — exactly what promised trust warned you about.
The oracle is the unsolved problem
Price feeds are solvable. You can verify a Bitcoin price on-chain with reasonable confidence.
But prediction markets on real-world events aren’t asking for verification.
They’re asking for interpretation.
Did the leader lose power? Was it an election, a coup, a war? Does it satisfy the contract’s intent?
No code answers that question. No price feed resolves it.
Every oracle solution proposed in the Bitcoin-native space so far answers the verification question. Nobody has solved the interpretation question — because it isn’t ultimately a technical problem.
It’s a legitimacy problem.
Someone has to say what happened. And whoever says it can be wrong, captured, or bribed.
This is why permissionless prediction markets on Liquid Bitcoin — settled in STRC — don’t escape the problem. They relocate it.
The censorship resistance that makes the architecture compelling is precisely what strips out the last guardrail against moral hazard.
Can federated guardians serve as a moral oracle?
The Fedi model — federated guardians coordinating trust across a known community — is the most promising candidate in the Bitcoin-native stack.
Guardians are already trusted with custody. They have skin in the game. They have reputational accountability to their community.
But three tensions remain:
Guardians are chosen for custody, not judgment. The trust that makes them good custodians doesn’t automatically transfer to interpreting ambiguous real-world events.
Federation size creates a trilemma. Small federations are fast but capturable. Large federations are resistant to capture but slow on time-sensitive resolutions.
Guardians have views. Fedi communities are local and high-trust — a church, a neighborhood, a conference community. For a global political prediction market, those guardians may have their own stake in the outcome. Not corruption necessarily. Just human nature.
Shared trust distributes the oracle problem. It doesn’t dissolve it.
The case for an AI co-adjudicator
Here is where the architecture gets interesting.
What if the oracle isn’t a person, a federation, or a contract alone — but a combination of federated human judgment and an AI agent, neither sufficient alone?
An AI agent brings something federated guardians cannot:
• It cannot be bribed
• It cannot be threatened
• It has no relationship with the outcome
• It applies consistent interpretive logic across cases
This directly addresses the failure mode my first post identified — a single interpreter being corrupted or captured.
But an AI agent alone isn’t sufficient either:
• Its training data can be gamed by sophisticated actors who understand the model
• It has no skin in the game and no reputational consequence for a bad call
• Participants may not accept its judgment as legitimate even when it’s correct
Together, they create something more robust.
Fedi guardians provide human legitimacy — accountability, community trust, reputational stakes.
The AI agent provides computational consistency — uncapturable, relationship-free, pattern-resistant to individual corruption.
Neither can be fully gamed without gaming both.
This is how good human institutions already work. Judgment plus rules. Neither sufficient alone.
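To make the pairing concrete, here is a minimal sketch of how a co-adjudicated resolution rule might work. Everything here is hypothetical and illustrative — the function names, the supermajority threshold, and the escalation path are my assumptions, not any existing Fedi or oracle API:

```python
# Hypothetical sketch: a market resolves only when the guardian
# federation's supermajority vote AND the AI adjudicator's reading agree.
# All names and thresholds are illustrative, not a real implementation.
from collections import Counter

def resolve(guardian_votes: list[str], ai_verdict: str,
            supermajority: float = 0.67) -> str:
    """Return the outcome if both layers concur, else 'ESCALATE'."""
    if not guardian_votes:
        return "ESCALATE"
    outcome, count = Counter(guardian_votes).most_common(1)[0]
    if count / len(guardian_votes) < supermajority:
        return "ESCALATE"   # the federation itself is split
    if outcome != ai_verdict:
        return "ESCALATE"   # human and machine disagree
    return outcome          # both layers concur

# 5 of 6 guardians say YES and the AI agrees -> resolves YES
print(resolve(["YES"] * 5 + ["NO"], "YES"))        # YES
# Only 4 of 6 agree -> no supermajority, even if the AI concurs
print(resolve(["YES"] * 4 + ["NO"] * 2, "YES"))    # ESCALATE
```

The point of the sketch is the failure mode it forces: neither a captured federation nor a gamed model can unilaterally settle the market; disagreement escalates rather than resolves.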
The Trust Stack, completed

But this isn’t just a map.
These layers can be assembled.
Mapped against the framework:
• Held — Bitcoin as the ultimate reserve. The anchor that makes STRC’s promise credible.
• Promised — STRC as the settlement layer. Strategy’s balance sheet backs the dividend. Counterparty risk is real but bounded and transparent.
• Shared — Fedi guardians plus AI co-adjudicator as the oracle layer. Human legitimacy paired with computational consistency.
This is a trust stack, not a single layer.
And it suggests a more precise version of the claim from my first post:
Not everything should be priced.
But some things can be — if the trust stack is correctly assembled.
What this means for builders
STRC’s yield produces a natural cost of capital.
That cost disciplines which markets get created.
Low-signal, high-manipulation-risk markets get priced out.
Hard-to-influence, high-epistemic-value markets survive.
The architecture doesn’t eliminate the moral question.
It builds the moral question into the economics.
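A back-of-envelope version of that filter: a market is only worth creating if its expected fees exceed the yield forgone on the collateral locked behind it. The numbers below are made up for illustration, and the 9% rate is an assumption, not STRC's actual dividend:

```python
# Hypothetical sketch: STRC yield as a cost of capital that filters
# which markets get created. All figures are illustrative.
def market_is_viable(expected_fee_revenue: float,
                     collateral_locked: float,
                     annual_yield: float,
                     duration_years: float) -> bool:
    """A market survives only if fees beat the yield forgone on collateral."""
    carrying_cost = collateral_locked * annual_yield * duration_years
    return expected_fee_revenue > carrying_cost

# High-signal market: $5,000 in fees vs $100k locked at 9% for 3 months
print(market_is_viable(5_000, 100_000, 0.09, 0.25))  # True  (cost = $2,250)
# Low-signal market: $500 in fees, same collateral and horizon
print(market_is_viable(500, 100_000, 0.09, 0.25))    # False
```

That is the whole mechanism: the yield is a hurdle rate, and markets that cannot clear it never get funded.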
The open question
The oracle problem isn’t fully closed by this architecture.
The interpretation oracle requires irreducible human judgment — which means the question is always whose judgment and under what constraints.
But for the first time, there is a Bitcoin-native answer worth taking seriously:
Federated human legitimacy.
Computational consistency.
STRC as the settlement layer.
Bitcoin as the foundation underneath.
We never eliminate trust.
We only decide where to place it — and how to check it.
The oracle problem doesn’t disappear.
It condenses. It clarifies. It moves.
Interpretation still belongs to humans.
But now it can be paired with systems that make it legible, bounded, and accountable.
For the first time, the pieces can be assembled:
Human judgment.
Machine consistency.
Enforceable settlement.
Sound money beneath it all.
We never eliminate trust.
We only choose where it lives—
and whether it can hold.
Further Reading
If you want to explore the ideas behind this more fully:
Core Framework
Where Trust Lives: Held • Shared • Promised
The foundation. Where trust sits determines everything that follows.
Bitcoin, Sats, and Money as Language
Why denomination and structure shape how we understand value.
This Series
The Moral Limits of Prediction Markets
Why some things resist pricing—and where markets break.
Optional
Fix the Money, Fix the World
The broader case for monetary foundations.
Bitcoin, Generosity, and the Future
Where trust, value, and human intention intersect.
Start with the framework. Everything else builds from there.