Every now and then, a piece of infrastructure launches, and you can tell from the spec, the language, and the decisions made and not made that real engineering craft has gone into it.
Mastercard's Verifiable Intent (VI) is one of those.
I've been spending time with the VI specification for a few weeks now. Partly because we build infrastructure adjacent to it: aiGUARD is an execution-control layer for AI systems, with patents pending. Partly because I want to understand what the credential layer looks like when it's done well. And partly because if our work and Mastercard's end up anywhere near each other in a production stack, I want to know exactly where we sit and where they sit.
This post is about what I've learned from reading their work closely and where I think ours fits alongside it.
What Mastercard and Google have actually done
Verifiable Intent was publicly announced on 5 March 2026, co-developed with Google, and published under Apache 2.0 at verifiableintent.dev and in the agent-intent/verifiable-intent GitHub repository. Anyone can read the spec. Anyone can implement against it. That alone is worth noting. A major payment network shipping core infrastructure as open source, rather than locking it behind a licensing firewall, is not the historical norm.
No novel cryptography. Instead, it uses a combination of SD-JWT, JWS, ES256 over NIST P-256, and RFC 7800 key confirmation. The specification is at draft v0.1, and the main branch was last updated in late April, so it's a living document, not a launch-and-freeze exercise.
The architecture comprises three layers of signed credentials, each cryptographically bound to the layer above.
Layer 1 is the identity credential — an SD-JWT signed by a credential provider, including a confirmation key that binds the user's public key, with a recommended lifetime of no more than a year. Layer 2 is the intent credential, signed with that user key, either in Immediate mode (final transaction values) or Autonomous mode (constraint-bearing mandates that delegate to an agent key). Layer 3 is the fulfilment credential, signed by the agent at transaction time and split into two views: L3a for the payment network and L3b for the merchant, both binding back to the user mandate through selective disclosure.
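To make the chain of binding concrete, here is a minimal sketch in Python. The `cnf`/`jwk` confirmation claim is standard RFC 7800; every other field name here is my own placeholder, not the spec's actual claim set, and real credentials would be signed SD-JWTs, not plain dicts.

```python
# Illustrative sketch of VI's three-layer binding. Only cnf/jwk (RFC 7800)
# is a real claim name; the rest are placeholders, not the spec's fields.

# Layer 1: identity credential, signed by a credential provider.
# The cnf (confirmation) claim binds the user's public key.
identity = {
    "iss": "https://credential-provider.example",  # hypothetical issuer
    "cnf": {"jwk": {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."}},
}

# Layer 2: intent credential, signed with the user key confirmed above.
# In Autonomous mode it delegates to an agent key under constraints.
intent = {
    "signed_by": identity["cnf"]["jwk"],   # user key from Layer 1
    "mode": "autonomous",
    "agent_key": {"kty": "EC", "crv": "P-256", "x": "a...", "y": "b..."},
    "constraints": {"amount_max": 5000},   # minor units, illustrative
}

# Layer 3: fulfilment credential, signed by the agent at transaction
# time, binding back to the user's mandate.
fulfilment = {
    "signed_by": intent["agent_key"],
    "mandate": intent,
    "amount": 4200,
}

def chain_is_bound(fulfilment: dict, identity: dict) -> bool:
    """Toy check: each layer's signer is the key the layer above confirmed."""
    mandate = fulfilment["mandate"]
    return (
        mandate["signed_by"] == identity["cnf"]["jwk"]
        and fulfilment["signed_by"] == mandate["agent_key"]
    )
```

The point the sketch captures is structural: a verifier never has to trust the agent directly, only the chain of key confirmations rooted in the credential provider's signature.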
That privacy boundary between the payment rail and the merchant, enforced through selective disclosure rather than policy, is the kind of design decision you make only if you're thinking about this seriously.
The constraints specification defines eight registered types covering amount ranges, allowed merchants, allowed payees, line items, budgets, recurrence, and transaction references. Quantitative constraints are machine-enforceable at verification time. Verifiers must support all registered types, and any unknown constraints in open mandates must be rejected. This is not a loose framework. It's a precise one.
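The verifier-side behaviour described above can be sketched in a few lines. The constraint type identifiers below are placeholders I've invented for illustration, not the spec's registered names; what the sketch does reflect faithfully is the closed-world rule, where any unknown constraint type in an open mandate means rejection.

```python
# Illustrative constraint check. Type names are placeholders, not the
# spec's registered identifiers; the reject-unknown rule is the point.

KNOWN_TYPES = {"amount_range", "allowed_merchants"}

def verify(constraints: list[dict], txn: dict) -> bool:
    """Return True only if the transaction satisfies every constraint
    and every constraint type is one the verifier supports."""
    for c in constraints:
        if c["type"] not in KNOWN_TYPES:
            return False  # unknown constraint in an open mandate: reject
        if c["type"] == "amount_range":
            if not (c["min"] <= txn["amount"] <= c["max"]):
                return False
        elif c["type"] == "allowed_merchants":
            if txn["merchant"] not in c["merchants"]:
                return False
    return True
```

Fail-closed is the design choice worth noticing: a verifier that skipped constraints it didn't understand would silently widen every mandate it couldn't parse.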
What I noticed reading it closely
What's more striking than what VI does is what it deliberately omits. The specification is precise about its scope: transport protocols, key provisioning, credential provider enrolment, agent platform APIs, dispute resolution, and regulatory compliance mapping are all explicitly out of scope. The PSD2/SCA references include a disclaimer: informational only, no compliance claims made, not legal advice. The design rationale openly acknowledges that VI does not replace SCA.
That's not a weakness. It's the opposite. Mastercard has drawn the boundary clearly and said, in effect: we're building the credential layer properly. Others can build around us.
That posture, precise about what you do and what you don't, is how good infrastructure gets built. It's also how good infrastructure combines with other infrastructure. If VI tried to solve everything adjacent, there would be no room for anyone else to contribute. Because it doesn't, there is.
The extension point that caught my eye
Reading closely, one line in the specification overview stopped me: an optional claim called agent_attestation, currently undefined, with a note that future companion documents will define specific attestation schemes.
Undefined by design.
That single phrase tells you a lot about the engineering culture at Mastercard and Google. They've left an explicit extension point for agent identity, behavioural attestation, and proof-of-governance work, without pre-specifying what it should look like. It's an open seat at the table.
I read that and thought: the work we're doing at aiGUARD is exactly the kind of work that could fill that seat.
Where our work sits
Here's how I think about the two pieces fitting together.
VI answers: Was the user's delegated authority valid, and are the agent's actions within the constraints the user signed off on? That's a credential-layer question. VI answers it with cryptographic precision, at verification time, with machine-enforceable checks against registered constraint types.
aiGUARD answers a different question: Should the AI-generated instruction that's about to enter the credential chain be delivered at all? That's a runtime-execution-control question. Our patented architecture (GB2603184.9) sits structurally between AI output generation and delivery, and evaluates each output against three parameters — confidence, consequence, and user state — before anything propagates downstream.
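To show what I mean by a checkpoint between generation and delivery, here is a purely conceptual sketch. This is my shorthand for the three-parameter idea, not aiGUARD's patented implementation; the thresholds and field names are invented.

```python
# Conceptual sketch of a generation-to-delivery checkpoint. Not the
# patented aiGUARD architecture; thresholds and names are invented.

from dataclasses import dataclass

@dataclass
class Evaluation:
    confidence: float     # how well-supported the output is (0.0 to 1.0)
    consequence: str      # "LOW" | "MEDIUM" | "HIGH"
    user_informed: bool   # is a fully informed user reviewing the output?

def allow(e: Evaluation) -> bool:
    """The gate sits between AI output generation and delivery:
    nothing propagates downstream without passing this check."""
    if e.consequence == "HIGH":
        # high-consequence outputs need high confidence AND a human in the loop
        return e.confidence >= 0.9 and e.user_informed
    return e.confidence >= 0.5
```

The structural point is the placement, not the thresholds: the check happens before the output enters any downstream chain, credential-bearing or otherwise.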
GEC (GB2607087.0), our per-inference cryptographic certificate, records the governance that occurred. Not a policy about governance. Evidence of governance, architecturally bound to the output.
Three layers. Three different jobs:
- VI: Was the user's delegated authority valid, and does the transaction fit within it?
- aiGUARD: Should the AI's instruction execute, given what we know about confidence, consequence, and user state?
- GEC: Can we prove governance was applied, in a form any third party can verify?
One stack. One platform. Three different questions. None of the three layers substitutes for the others.
Third-party validation of the runtime layer
This isn't just our framing. The Cloud Security Alliance published an analysis of AP2, the Google protocol VI is aligned with, in October 2025. It identified four threat categories that lie outside the credential layer: workflow hijacking via prompt injection, emergent goal misalignment, logic-bomb activation, and container escape leading to agent collusion. All are runtime phenomena. All lie outside what a credential-layer specification is designed to address.
Recent academic work makes the same point in a more technical register. One arXiv paper from earlier this year argues for zero-trust runtime verification in AP2-style systems, emphasising that authorisation should be evaluated at execution time rather than assumed from static issuance. A second paper red-teams AP2-style shopping agents and reports prompt injection as a practical, reproducible risk.
The conversation about where verification happens, at the credential layer, at the execution layer, or both, is becoming more precise, not less. VI is the clearest statement yet of what the credential layer should look like. The runtime execution-control layer is what we're building.
Something unusual about this blog post
Which brings me to one more thing.
If we are going to argue that AI-generated content should be governed at runtime and that outputs shouldn't propagate without a mandatory checkpoint, we should probably apply that argument to our own output.
So, this post is.
The draft you're reading went through the aiGUARD pipeline before publication. The confidence parameter was backed by an independent verification report produced by my colleague Henri (aka ChatGPT) (pro mode, extended thinking), cross-referenced against primary sources at verifiableintent.dev, the VI GitHub repository, Mastercard's official announcement, the Cloud Security Alliance analysis, and recent arXiv papers. The consequence parameter was set HIGH because Mastercard is a downstream partner through Clowd9, and factual accuracy matters commercially. The user state parameter was me, fully informed, reviewing, and editing before anything went live.
aiGUARD's decision: ALLOW.
GEC then issued the certificate below, on the final published text. The output_hash is a real sha256 of what you're reading. The decision record is real. The timestamp is the publication moment. It's a reference implementation, not the production IBM build that arrives in September, but everything in it is computed on the actual artefact.
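Checking the output_hash yourself is a two-line exercise. This assumes only what the paragraph above states, that the hash is a plain sha256 of the published text; how the rest of the certificate is structured is GEC's concern, not shown here.

```python
# Recompute the certificate's output_hash from the published text.
# Assumes sha256 over the UTF-8 bytes of the artefact, per the post.

import hashlib

def output_hash(published_text: str) -> str:
    return hashlib.sha256(published_text.encode("utf-8")).hexdigest()

# A match means the certificate was computed on exactly this artefact;
# any edit to the text, however small, changes the digest.
```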
Every blog and every article we publish from this point on will carry one. It's how our site operates.
If you're in the conversation
If you're implementing VI, working on agentic payments infrastructure, or thinking about where the runtime governance layer sits, please get in touch. We are at aiguard.systems, on LinkedIn and X. Clowd9's payments integration is committed; IBM's PoC runs through September 2026; and we'd like to have a conversation with more people about what a complete stack for agentic commerce looks like.
To the team at Mastercard and Google: the work is genuinely well done. Thank you for drawing the boundary cleanly, and for leaving the door open.