EU AI Act · Regulatory Compliance
Articles 9, 12 and 13. Satisfied architecturally.
The EU AI Act's compliance deadline for high-risk AI systems is August 2026. Most operators are treating it as a documentation exercise. It is not. Here is how aiGUARD satisfies the Act's most demanding requirements, structurally, not on paper.
High-Risk Compliance Deadline · August 2026
3 months away. Operators must demonstrate compliance with Articles 9, 12 and 13 at the point of every AI output, not in policy documents.
Why policy-based compliance will not be enough
The Act wants proof, not policy
Articles 9, 12 and 13 of the EU AI Act require demonstrable, per-interaction accountability for high-risk AI systems. Risk registers, governance committees, and policy frameworks satisfy none of them.
What the Act requires is proof. Per-inference. Externally verifiable. Produced before output delivery, not after the fact. That is precisely what aiGUARD and GEC deliver.
aiGUARD is currently being built into the Alclusio AI platform, an AI accessibility platform for regulated sectors built by IBM. The compliance architecture described here is the same architecture being implemented under a signed Statement of Work with IBM Hursley.
EU AI Act · Article 9
Risk Management System
Article 9 requires operators of high-risk AI systems to establish, implement, document, and maintain a risk management system: one that operates continuously throughout the system lifecycle and identifies, analyses, and evaluates risks on a per-output basis. It also requires that risk mitigation measures are applied before outputs are delivered.
Article 9 Requirement
How aiGUARD Satisfies It
Continuous risk management throughout lifecycle
Risk controls must operate on every interaction, not just at deployment.
Structural. aiGUARD is architecturally embedded between AI generation and delivery. It operates on every candidate output without exception; it cannot be bypassed, disabled, or circumvented.
Risk identification and evaluation per output
Each output must be individually assessed for its risk profile before it reaches a user.
Structural. aiGUARD evaluates three parameters simultaneously per output: Confidence (certainty of the generated content), Consequence (severity classification of potential error), and User State (real-time vulnerability and context assessment).
Risk mitigation before delivery
Controls must be applied before the output reaches the end user, not retrospectively.
Structural. Delivery is inhibited by default the moment generation completes. No output reaches any user without an explicit runtime execution permission from aiGUARD. Mitigation is structural; it occurs before delivery, every time.
Documentation of risk evaluation
Operators must demonstrate that risk evaluation occurred for each interaction.
GEC Certificate. Every aiGUARD evaluation produces a GEC: a cryptographic certificate recording the confidence score, consequence level, user state hash, and execution decision for that specific output. This is per-inference documentation by architecture.
Residual risk acceptability
Operators must demonstrate that residual risks are acceptable given the system's purpose.
Structural. The consequence classification system explicitly models the risk of delivery error for the specific use context. The execution permission (ALLOW / MODIFY / DEFER / SUPPRESS) reflects the assessed residual risk. The decision is recorded in the GEC.
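The evaluation-to-permission mapping described above can be sketched in code. This is a minimal illustration only: the parameter scales, thresholds, and decision rules here are assumptions for the sketch, not aiGUARD's actual governance policy.

```python
from enum import Enum

class Permission(Enum):
    ALLOW = "ALLOW"
    MODIFY = "MODIFY"
    DEFER = "DEFER"
    SUPPRESS = "SUPPRESS"

def evaluate(confidence: float, consequence: int, user_vulnerable: bool) -> Permission:
    """Illustrative gate mapping the three evaluation parameters to an
    execution permission.

    confidence      -- model certainty in [0, 1] (assumed scale)
    consequence     -- severity class, 0 (trivial) to 3 (critical) (assumed scale)
    user_vulnerable -- real-time user-state flag (assumed representation)
    """
    if consequence >= 3 and confidence < 0.99:
        return Permission.SUPPRESS   # high-stakes and uncertain: never deliver
    if user_vulnerable and confidence < 0.9:
        return Permission.DEFER      # route to human review
    if confidence < 0.7:
        return Permission.MODIFY     # deliver only with caveats
    return Permission.ALLOW

# Default-deny: delivery proceeds only when the gate grants ALLOW or MODIFY.
decision = evaluate(confidence=0.95, consequence=1, user_vulnerable=False)
```

The point of the sketch is the ordering: suppression and deferral rules are checked before any permissive branch, so delivery remains inhibited unless the evaluation explicitly grants it.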
EU AI Act · Article 12
Record-Keeping & Logging
Article 12 requires that high-risk AI systems automatically generate logs sufficient to enable post-hoc assessment of compliance. Logs must capture the period of each use, the reference database against which inputs were checked, input data where relevant, and the identity of the persons involved in verification. Logs must be retained and made available to regulators on request.
Article 12 Requirement
How aiGUARD & GEC Satisfy It
Automatic log generation
Logs must be produced automatically, not manually compiled after the fact.
GEC Certificate. GEC is issued automatically on every governed interaction. No manual logging step is required. The certificate is generated as a structural consequence of the governance process; if governance occurs, the GEC exists.
Sufficient for post-hoc compliance assessment
Logs must contain enough information for a regulator to assess whether the system operated compliantly.
GEC Certificate. Each GEC contains: output hash, execution decision, confidence score, confidence band, consequence level, user state hash, governance policy version, accessibility compliance status, timestamp, and a hardware-protected cryptographic signature. A complete per-inference compliance record.
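A record carrying the fields listed above might look like the following. The field names and the issuing function are illustrative assumptions for this sketch; the actual GEC schema is not published here.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

@dataclass(frozen=True)
class GECCertificate:
    # Field names mirror the per-inference record described above (assumed schema).
    output_hash: str            # SHA-256 of the delivered output
    execution_decision: str     # ALLOW / MODIFY / DEFER / SUPPRESS
    confidence_score: float
    confidence_band: str        # GREEN / AMBER / RED
    consequence_level: int
    user_state_hash: str        # privacy-preserving SHA-256, never raw PII
    policy_version: str         # governance policy in force at issuance
    accessibility_compliant: bool
    timestamp: str              # ISO 8601, issued at the moment of delivery permission
    signature: str = ""         # Ed25519 signature over the canonical payload

def issue(output: str, decision: str, confidence: float, band: str,
          consequence: int, user_state_hash: str, policy: str,
          accessible: bool) -> GECCertificate:
    """Issue a certificate as a structural consequence of governance:
    the record is built from the evaluation itself, with no manual step."""
    return GECCertificate(
        output_hash=hashlib.sha256(output.encode()).hexdigest(),
        execution_decision=decision,
        confidence_score=confidence,
        confidence_band=band,
        consequence_level=consequence,
        user_state_hash=user_state_hash,
        policy_version=policy,
        accessibility_compliant=accessible,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```

Because every field is populated from the governance evaluation at issuance time, the record cannot drift from what actually happened: the log and the decision are the same artefact.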
Period of each use
Logs must capture when each interaction occurred.
GEC Certificate. Every GEC includes an ISO 8601 timestamp recording the precise moment of governance certificate issuance, which is structurally tied to the moment of delivery permission.
Externally verifiable
Logs must be made available to national competent authorities on request.
GEC Certificate. GEC certificates are cryptographically verifiable by any third party, including regulators, without requiring access to the operator's internal systems. The Ed25519 signature allows independent verification of certificate authenticity and integrity.
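Third-party verification of an Ed25519-signed record needs only the payload, the signature, and the issuer's public key. The sketch below uses the widely available `cryptography` package; the payload shape and key-distribution mechanism are assumptions, and a real issuer key would live in hardware, not be generated inline.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature

# Issuer side (inside the governed system): sign the canonical payload.
# Generated in-memory here purely for the sketch.
issuer_key = Ed25519PrivateKey.generate()
payload = b'{"output_hash":"...","decision":"ALLOW"}'   # illustrative payload
signature = issuer_key.sign(payload)
public_bytes = issuer_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# Verifier side (e.g. a regulator): no access to operator systems required.
def verify(payload: bytes, signature: bytes, public_bytes: bytes) -> bool:
    """Return True iff the signature is valid for this payload and key."""
    try:
        Ed25519PublicKey.from_public_bytes(public_bytes).verify(signature, payload)
        return True
    except InvalidSignature:
        return False
```

Any change to a single byte of the payload invalidates the signature, which is what makes the certificate tamper-evident without any trust in the operator's infrastructure.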
Identity protection in logging
Logging must not compromise user privacy or data protection obligations.
Synapse-ID. GEC records a SHA-256 privacy-preserving hash of the user state, not personally identifiable information. Synapse-ID further reduces the identity footprint of the interaction, ensuring logging is compliant with UK GDPR and EU GDPR simultaneously.
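A privacy-preserving user-state hash of the kind described above can be sketched with the standard library. The canonical-JSON serialisation and the per-deployment salt are assumptions of this sketch, included because low-entropy states would otherwise be vulnerable to dictionary reversal.

```python
import hashlib
import json
import os

def user_state_hash(state: dict, salt: bytes) -> str:
    """SHA-256 fingerprint of a user state, storing no PII in the log.

    Canonical serialisation (sorted keys, fixed separators) makes the hash
    stable across runs for audit comparison; the salt resists brute-force
    reversal of small state spaces.
    """
    canonical = json.dumps(state, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(salt + canonical).hexdigest()

salt = os.urandom(16)   # in practice, a protected per-deployment secret
fingerprint = user_state_hash({"vulnerability": "low", "context": "billing"}, salt)
```

The log then carries only the 64-character hex digest: enough to prove the same state was evaluated, never enough to recover who the user was.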
EU AI Act · Article 13
Transparency & Information Provision
Article 13 requires that high-risk AI systems are designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. It requires that outputs be interpretable, that the system's capabilities and limitations be disclosed, and that users be informed when they are interacting with an AI system.
Article 13 Requirement
How aiGUARD & GEC Satisfy It
Transparent operation
The system must operate in a way that deployers can interpret and appropriately use its outputs.
Structural. Every aiGUARD execution decision is accompanied by its three evaluation parameters (confidence, consequence, and user state), all recorded in the GEC. Deployers can inspect any GEC to understand exactly why a given output was delivered, modified, or suppressed.
Interpretable outputs
Outputs must be interpretable; their basis and limitations must be discernible.
GEC Certificate. The confidence score and confidence band (GREEN / AMBER / RED) in each GEC explicitly communicate the degree of certainty associated with the output. A deployer or regulator can immediately assess the epistemic status of any delivered output.
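The score-to-band mapping might look like the following. The thresholds are illustrative assumptions for this sketch, not published aiGUARD values.

```python
def confidence_band(score: float) -> str:
    """Map a confidence score in [0, 1] to the band recorded in the GEC.

    GREEN -- high certainty; AMBER -- deliver with caution; RED -- low
    certainty, likely to trigger MODIFY / DEFER / SUPPRESS decisions.
    (Band thresholds here are assumed for illustration.)
    """
    if score >= 0.9:
        return "GREEN"
    if score >= 0.7:
        return "AMBER"
    return "RED"
```

Recording the band alongside the raw score means a reader of the certificate gets both an exact value and an immediately interpretable classification.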
Capabilities and limitations disclosure
The system's capabilities and limitations must be documented and available.
Structural. The aiGUARD architecture enforces its own limitations structurally. The consequence classification system defines the scope of outputs the system will deliver without human review. These boundaries are defined in the governance policy version recorded in each GEC.
Accessibility compliance
High-risk AI systems must meet accessibility requirements for users with disabilities.
GEC Certificate. GEC is the world's first per-inference AI accessibility certificate. Every GEC records whether the output met the accessibility policy applicable to that user, making it the first implementation of per-interaction AI accessibility compliance certification.
User notification of AI interaction
Users must be informed when they are interacting with an AI system.
Structural. The Alclusio AI platform, within which aiGUARD is currently being implemented, is an AI accessibility platform designed for users who may not recognise AI interaction. aiGUARD's user state evaluation includes awareness indicators as a governance parameter.
What this means for your organisation
Structural compliance, not procedural
For Regulated-Sector Operators
Every AI output is governed before delivery, not reviewed retrospectively. This is structural compliance, not procedural compliance. GEC certificates are produced automatically: no manual logging overhead, and no compliance gap between interactions and records. Licensing aiGUARD means Articles 9, 12 and 13 are satisfied by your architecture.
For Legal & Compliance Teams
GEC certificates are externally verifiable without platform access. Your regulator does not need to trust your logs; they can verify them independently. The per-inference audit trail exists from day one of deployment, rather than being built retrospectively when an investigation begins. Privacy-preserving logging satisfies both the EU AI Act and GDPR simultaneously.
For Regulators & Auditors
Each GEC is a cryptographically signed record of governance: confidence, consequence, decision, and output hash for a specific AI interaction. Verification requires no system access, no operator cooperation, and no trust assumption. The mathematics of the signature is the proof. The per-inference accessibility certificate is a new standard for regulated AI deployment.