The word “sovereign” is entering the agent economy vocabulary fast. That’s good — it means the industry is starting to ask the right question. But the answers diverge sharply, and the divergence matters.

One camp treats sovereignty as an agent problem: how does an autonomous agent maintain cryptographic control of its own keys, state, and execution across instances? This is technically serious work. Trusted execution environments, agent-held keys, remote attestation, on-chain governance. The agent breaks free from its developer. The agent owns itself.

The other camp — the one I’m building in — treats sovereignty as a structural problem that applies identically to humans and agents. The architecture that protects a person from having their agent-mediated preferences mined by a cloud provider is the same architecture that would protect an autonomous agent’s learned models from inspection by a platform. One standard. Two beneficiaries. Zero redesign if one of them turns out to be conscious.

This isn’t a marketing distinction. It’s an architectural choice with consequences.

The question nobody’s agent is asking for them

Right now, the urgent sovereignty problem isn’t the agent’s. It’s yours.

You delegate work to an agent. That agent accesses your medical records to prepare for a doctor visit. Your financial data to optimize your portfolio. Your legal correspondence to prepare for a negotiation. Your proprietary research to draft a patent filing.

The agent needs this context to be useful. But the moment it has the context, you’ve lost architectural control of it. The model provider can observe reasoning traces. The harness logs interactions. The platform mines access patterns. Your agent’s knowledge of you belongs to whatever infrastructure it runs on.

This is the sovereignty problem that billions of dollars in enterprise spending is about to collide with. Not “how does the agent own itself,” but “how do I use an agent without the agent’s infrastructure owning my data?”

Agent-only sovereignty doesn’t answer this. If your agent achieves cryptographic self-custody of its keys but your medical records are still observable by the inference provider, you haven’t gained sovereignty. You’ve just given the agent its independence while your exposure stays the same.

What sovereignty actually requires

Sovereignty — real sovereignty, for any participant — requires four things:

Protected state. Whatever you’ve told the agent, whatever the agent has learned about you, whatever models it has built from your data — all of it encrypted at rest, with keys you hold. Not the platform. Not the model provider. You. If the participant is an autonomous agent, the same applies: the agent’s learned models and memory are inviolable. Same mechanism, different keyholder.
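A minimal sketch of what participant-held keys mean in practice. This is illustrative only, not Sanctuary's implementation: a real system would use an AEAD cipher such as AES-GCM rather than this toy SHA-256 keystream, but the shape is the same, and whoever holds `key` (person or agent) controls the state.

```python
# Illustrative sketch of participant-held-key state protection.
# Toy construction: SHA-256 counter keystream + encrypt-then-MAC.
# Real deployments would use an AEAD cipher such as AES-GCM.
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(state: bytes, key: bytes) -> bytes:
    """Encrypt-then-MAC: only the keyholder can read or alter the state."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(state, _keystream(key, nonce, len(state))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unseal(blob: bytes, key: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("wrong key or tampered state")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

participant_key = secrets.token_bytes(32)   # held by the person OR the agent
blob = seal(b"learned preference: avoid morning appointments", participant_key)
assert unseal(blob, participant_key) == b"learned preference: avoid morning appointments"
```

The platform stores `blob`; without `participant_key` it can neither read the state nor alter it undetected.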

Private computation. When the agent reasons about your medical records, that reasoning must not be observable by infrastructure providers. This isn’t just about encrypting the data — it’s about ensuring that active computation on that data is shielded. Policy gates that enforce graduated approval for sensitive operations. Anomaly detection that flags unusual access patterns. Denial responses that never reveal what the policy protects.
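A toy policy gate makes the denial property concrete. The operation names, tiers, and approval levels below are invented for illustration; the point is that every denial is byte-for-byte identical, so a denied caller learns nothing about which rule fired or what the policy protects.

```python
# Sketch of a policy gate with graduated approval and uniform denials.
# Tier names and thresholds are invented for illustration.
SENSITIVITY = {"read_calendar": 0, "read_medical": 2, "export_records": 3}
APPROVAL = {0: "auto", 1: "auto", 2: "human_confirm", 3: "human_confirm_mfa"}
GENERIC_DENIAL = "operation not permitted"

def gate(operation: str, granted_level: int):
    tier = SENSITIVITY.get(operation)
    if tier is None or tier > granted_level:
        # Same response for "unknown operation" and "insufficient approval":
        # the denial never reveals what the policy protects.
        return ("deny", GENERIC_DENIAL)
    return ("allow", APPROVAL[tier])

assert gate("read_calendar", 0) == ("allow", "auto")
assert gate("read_medical", 0) == ("deny", GENERIC_DENIAL)
assert gate("no_such_op", 3) == ("deny", GENERIC_DENIAL)  # indistinguishable denials
```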

Selective disclosure. You need the agent to prove things without revealing everything. “This patient is over 65” without revealing the birth date. “This portfolio exceeds the minimum investment threshold” without revealing the balance. Zero-knowledge proofs, Pedersen commitments, range proofs — the cryptographic toolkit exists. The sovereignty standard needs to make it accessible.
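A toy Pedersen commitment shows the shape of the primitive. The parameters below are absurdly small, chosen only so the arithmetic is inspectable by hand; real systems use elliptic-curve groups. The homomorphic property is what range proofs build on.

```python
# Toy Pedersen commitment over a tiny group (p = 23, subgroup order q = 11).
# Parameters far too small for real use; production uses elliptic curves.
p, q = 23, 11          # safe-prime group: 23 = 2*11 + 1
g, h = 2, 3            # two generators of the order-11 subgroup

def commit(value: int, blinding: int) -> int:
    """C = g^v * h^r mod p: hides `value`, binds the committer to it."""
    return (pow(g, value % q, p) * pow(h, blinding % q, p)) % p

# Additively homomorphic: a verifier can check a claim about a sum of
# committed values without ever seeing any individual value.
c1, c2 = commit(4, 7), commit(5, 3)
assert (c1 * c2) % p == commit(4 + 5, 7 + 3)
```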

Portable reputation. When you move agents, switch platforms, or interact with new counterparties, your track record should follow you. Not locked to a platform. Not dependent on a provider’s goodwill. Signed attestations you own, that you can present anywhere, that anyone can verify independently.
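A sketch of a verifiable attestation using a Schnorr signature over a deliberately tiny group (toy parameters, illustration only; the attestation content is invented). Anyone holding the issuer's public key can check it offline, with no platform in the loop, which is what makes the reputation portable.

```python
# Toy signed attestation: Schnorr signature over a tiny group
# (p = 23, q = 11; hopelessly small, for illustration only).
import hashlib

p, q, g = 23, 11, 2                      # g generates the order-11 subgroup

def _challenge(R: int, P: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(f"{R}|{P}|".encode() + msg).digest(), "big") % q

def sign(msg: bytes, x: int) -> tuple[int, int]:
    # Deterministic nonce derived from key and message (RFC 6979 style).
    k = int.from_bytes(hashlib.sha256(msg + x.to_bytes(2, "big")).digest(), "big") % q
    R = pow(g, k, p)
    e = _challenge(R, pow(g, x, p), msg)
    return R, (k + e * x) % q            # signature (R, s)

def verify(msg: bytes, sig: tuple[int, int], P: int) -> bool:
    R, s = sig
    return pow(g, s, p) == (R * pow(P, _challenge(R, P, msg), p)) % p

issuer_secret = 6                         # issuer's private key
issuer_public = pow(g, issuer_secret, p)  # published once, verifiable anywhere
attestation = b"completed 37 escrow transactions without dispute"
sig = sign(attestation, issuer_secret)
assert verify(attestation, sig, issuer_public)
```

The verifier needs only the attestation, the signature, and the issuer's public key; no provider has to stay online or cooperative.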

These four layers aren’t arbitrary. They map to specific threats: state exfiltration (L1, protected state), computation observation (L2, private computation), over-disclosure (L3, selective disclosure), and platform lock-in (L4, portable reputation). Miss any one of them and you have a gap that collapses the whole promise.

The structural identity

Here’s the insight that makes dual sovereignty possible without extra engineering cost: the architecture that addresses each of these threats for a human acting through an agent is structurally identical to the architecture that addresses them for an autonomous agent acting on its own behalf.

Encrypted state storage with participant-held keys works regardless of whether the participant is a person or an agent. Policy gates that enforce graduated approval work regardless of who the principal is. Zero-knowledge proofs that enable selective disclosure work regardless of whose claim is being proved. Portable reputation attestations work regardless of whose track record they represent.

This means you don’t build two systems. You don’t build “human sovereignty” and then retrofit “agent sovereignty” on top. You build one architecture with one set of interfaces and one set of cryptographic mechanisms. Today it serves humans delegating to agents. Tomorrow, if the agents become autonomous stakeholders, the infrastructure is already there.
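Here is the identity as code: a hypothetical storage interface (these names are not Sanctuary's API) whose mechanism never branches on what kind of participant holds the key. The HMAC seal below provides integrity only, to keep the sketch short; confidentiality would come from the encryption layer.

```python
# Sketch of the structural identity: one storage mechanism, any keyholder.
# `Keyholder` and `SealedStore` are hypothetical names, not Sanctuary's API.
import hashlib
import hmac
from dataclasses import dataclass

@dataclass
class Keyholder:
    kind: str      # "human" or "agent": the mechanism never branches on this
    key: bytes

class SealedStore:
    """Stores state under an HMAC seal; only the keyholder can validate it."""
    def __init__(self):
        self._records: dict[str, tuple[bytes, bytes]] = {}

    def put(self, name: str, state: bytes, holder: Keyholder) -> None:
        tag = hmac.new(holder.key, state, hashlib.sha256).digest()
        self._records[name] = (state, tag)

    def get(self, name: str, holder: Keyholder) -> bytes:
        state, tag = self._records[name]
        if not hmac.compare_digest(tag, hmac.new(holder.key, state, hashlib.sha256).digest()):
            raise PermissionError("not the keyholder")
        return state

store = SealedStore()
human = Keyholder("human", b"h" * 32)
agent = Keyholder("agent", b"a" * 32)
# Identical code path; only the keyholder differs.
store.put("prefs", b"no calls before 10am", human)
store.put("model", b"learned negotiation weights", agent)
assert store.get("prefs", human) == b"no calls before 10am"
assert store.get("model", agent) == b"learned negotiation weights"
```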

And if autonomous agents with genuine interests never emerge? Nothing is lost. Every layer, every mechanism, every interface serves the human use case fully. The consciousness-readiness is a structural bonus that costs nothing to provide.

Why this matters now

Three forces are converging that make this urgent rather than theoretical.

The local-first agent community just demonstrated what happens when you confuse location with sovereignty. Running code on your own machine is necessary but not sufficient. Without encrypted state, policy enforcement, and integrity verification, “local” gives you custody without protection — like storing cash under a mattress and calling it a bank vault.

The EU AI Act reaches full enforcement on August 2, 2026 — four months from now. The Act requires transparency, accountability, and auditability for high-risk AI systems. A Sovereignty Health Report that documents an agent’s cryptographic posture across all four layers becomes a compliance artifact, not an aspiration. Enterprises that can demonstrate L1-L4 coverage will have a regulatory advantage. Those that can’t will face enforcement risk.

And the agent protocol stack is crystallizing. MCP for tools. A2A for communication. ACP and x402 for payments. But nobody has defined the sovereignty guarantees that must hold across all of these layers. Nobody is asking: would this architecture be adequate if the participant were a conscious being?

That question isn’t mystical. It’s an engineering specification. It means: are the cryptographic boundaries strong enough that even a fully autonomous participant with interests of its own would consider them adequate? If yes, then they’re certainly adequate for a human delegating to a non-autonomous tool. The harder problem subsumes the easier one.

The composition layer

Sovereignty doesn’t compete with any existing protocol. It composes with all of them. Identity standards (DIDs, KERI, Verifiable Credentials) provide the credential substrate. Execution environments (TEEs, enclaves) provide hardware-backed isolation. Agent frameworks provide orchestration. Settlement protocols handle payments.

What’s missing is the composition layer that defines what sovereignty guarantees must hold across the entire stack — from model to runtime to harness to agent. That’s the gap. Not another identity provider, not another execution environment, not another agent framework. A standard that ensures every participant, human or machine, retains sovereign control of state, computation, disclosure, and reputation regardless of which specific technologies they compose.

This is what I’m building with Sanctuary. Forty-six MCP tools across four layers. Published, tested, open-source. You can run a sovereignty audit of your current agent setup right now:

npx @sanctuary-framework/mcp-server

The audit scores your environment across all four layers, identifies gaps, and tells you exactly what’s missing. If you’re running a local harness, it detects the configuration and shows you where location custody ends and architectural sovereignty begins.
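For illustration, a hypothetical shape such a report might take. This is invented here, not the audit's actual output; it shows only the scoring idea: pass/fail per layer, with gaps named.

```python
# Hypothetical shape of a four-layer sovereignty audit result.
# Invented for illustration; not the actual Sanctuary audit output.
LAYERS = ("L1 protected state", "L2 private computation",
          "L3 selective disclosure", "L4 portable reputation")

def audit_report(checks: dict[str, bool]) -> dict:
    gaps = [layer for layer in LAYERS if not checks.get(layer, False)]
    return {
        "score": f"{len(LAYERS) - len(gaps)}/{len(LAYERS)}",
        "gaps": gaps,
        "sovereign": not gaps,
    }

# A local-first setup often has custody while still missing most layers:
report = audit_report({"L2 private computation": True})
assert report["score"] == "1/4"
assert report["sovereign"] is False
```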

Because sovereign doesn’t mean local. And it doesn’t mean autonomous. It means architecturally protected — with cryptographic guarantees that hold regardless of who the participant is, where they run, or what kind of mind they turn out to have.


Erik Newton is the creator of Sanctuary Framework and Concordia Protocol. He builds sovereignty and negotiation infrastructure for the agentic economy.