AI Execution Control · Patent Pending · GB2603184.9
The gate between generation and delivery
Every AI system generates outputs. Most deliver them without a mandatory checkpoint. aiGUARD changes that, architecturally. Not as a policy. Not as a setting that can be switched off. A non-bypassable enforcement layer, structurally embedded between generation and propagation.
Live Implementation
aiGUARD is currently being built into Alclusio AI, an AI accessibility platform for regulated sectors; the implementation is being delivered by IBM. Statement of Work signed. Kickoff: May 2026.
Execution Control Pipeline
Three stages. One non-bypassable checkpoint.
01 · Generate
Candidate output generated
AI model generates output. Delivery is inhibited by default. No output reaches a user without an explicit execution permission.
02 · Evaluate
aiGUARD evaluation
Confidence, consequence classification, and user state are evaluated simultaneously. A runtime execution permission is derived.
03 · Execute
Permission applied
Output is delivered, transformed under permission, or suppressed. There is no alternative pathway.
03a · Permitted
Delivered as generated
Output validated. Delivered unmodified.
Deliver
03b · Constrained
Transformed under permission
Output transformed under the same permission framework. Original preserved.
Adapt
03c · Suppressed
Blocked or escalated
Delivery blocked or escalated to human review.
Block
Non-Negotiable Invariants
Enforced at architecture level
INV-01
Generation ≠ Delivery
Generating an output does not authorise its delivery. Delivery is inhibited by default until an explicit runtime execution permission is issued.
INV-02
Permission Required
No delivery or transformation may execute without a valid runtime execution permission. This is not configurable. It is structural.
INV-03
Permission Governs All
Transformation operations are subject to the same permission framework as delivery. There are no exceptions and no alternative pathways.
INV-04
Bypass Prohibited
The generation component cannot bypass the control point. This constraint is architectural, not advisory. It cannot be switched off or configured away.
INV-05
Preservation & Reversibility
Where transformation occurs, the original output is preserved independently. Every modification is reversible and every decision is auditable.
Mandatory Evaluations
Three parameters. Every output. No exceptions.
01 · Confidence
How certain is the AI?
aiGUARD runs every output through computational validation, including consistency checks, cross-referencing, and anomaly detection, before anything reaches a user.
02 · Consequence
How serious is an error?
Healthcare. Legal. Financial. Safety-critical systems. aiGUARD classifies the potential impact of error propagation before delivery, every time, without exception.
03 · User State
Who is this person, right now?
Vulnerable. Frustrated. In crisis. aiGUARD reads operational context in real time and gates output accordingly. Not by assumption. By data.
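How the three parameters might combine into a single gating decision can be sketched as follows. Everything here is an assumption for illustration: the consequence classes, the `UserState` fields, and every threshold are invented, standing in for whatever calibrated policy a real deployment would use.

```python
from dataclasses import dataclass
from enum import Enum


class Consequence(Enum):
    """Impact classes for error propagation (illustrative ordering)."""
    ROUTINE = 1
    FINANCIAL = 2
    LEGAL = 3
    HEALTHCARE = 4
    SAFETY_CRITICAL = 5


@dataclass(frozen=True)
class UserState:
    """Operational context read at evaluation time; fields are hypothetical."""
    in_crisis: bool = False
    frustration: float = 0.0   # 0.0 (calm) .. 1.0 (highly frustrated)


def required_confidence(consequence: Consequence, user: UserState) -> float:
    """Higher-impact domains and vulnerable user states raise the bar."""
    base = {
        Consequence.ROUTINE: 0.60,
        Consequence.FINANCIAL: 0.80,
        Consequence.LEGAL: 0.85,
        Consequence.HEALTHCARE: 0.90,
        Consequence.SAFETY_CRITICAL: 0.99,
    }[consequence]
    if user.in_crisis:
        base = max(base, 0.95)          # crisis context raises the floor
    return min(1.0, base + 0.05 * user.frustration)


def gate(confidence: float, consequence: Consequence, user: UserState) -> str:
    """Map the three parameters to a permitted / constrained / suppressed outcome."""
    bar = required_confidence(consequence, user)
    if confidence >= bar:
        return "permitted"
    if confidence >= bar - 0.15:
        return "constrained"            # near-miss: transform rather than block
    return "suppressed"
```

The point of the sketch is the coupling: the same confidence score that passes for a routine query fails for a safety-critical one, and a user in crisis raises the bar regardless of domain.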
The aiGUARD Trust Stack
Three products. Complete governance.
Intellectual Property
Protected by patent. Licensable at scale.
aiGUARD is protected by two filed UK patent applications: one covering the complete AI output execution-control architecture, and one covering the Governance Execution Certificate, the world's first per-inference cryptographic proof that AI output governance occurred before delivery.
The patents are implementation-agnostic. They cover the architectural principle, not any specific deployment. Claims are broad, the licensing opportunity is significant, and protection is durable.
Licensing enquiries
Patent 1 · GB2603184.9
AI Output Execution-Control Architecture
Application No. GB2603184.9
Filed 11 February 2026 · UKIPO
Applicant: Alclusio AI Limited
Inventor: Christopher Hamilton
Pending · Accelerated Examination
Patent 2 · GB2607087.0
Governance Execution Certificate
Application No. GB2607087.0
Filed March 2026 · UKIPO
Applicant: aiGUARD Systems Limited
Inventor: Christopher Hamilton
Pending · Accelerated Examination