The EU AI Act's Article 9 doesn't just ask you to have a risk management system. It asks you to run one.

That distinction is the difference between a compliance programme and a compliance outcome.

Most of the AI governance industry is currently treating the Act as a documentation exercise. Inventories of AI systems. Risk classification matrices. Gap analyses. Policy frameworks. Human oversight committees. All necessary. All first steps. None sufficient on its own for what Articles 9, 12 and 13 actually require.

Start with Article 9

Article 9 requires providers of high-risk AI systems to establish, implement, document and maintain a risk management system. Pay attention to the order: the documentation isn't the risk management system. The documentation records that the risk management system exists and is operating.

The Act itself describes the risk management system as a "continuous iterative process" planned and run throughout the system's entire lifecycle. It requires regular systematic review and update. It requires testing throughout development and before market placement, against predefined metrics and probabilistic thresholds. Article 72 then extends the obligation into post-market life: providers must actively and systematically collect, document and analyse performance data throughout the lifetime of the system.

This is not a binder. This is an operational feedback loop.
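
To make "operational feedback loop" concrete, here is a minimal sketch in Python of the shape such a loop can take: post-market performance data evaluated against predefined probabilistic thresholds, with breaches re-opening the review cycle. The metric names, threshold values and sample counts are illustrative assumptions, not anything the Act prescribes.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class RiskThreshold:
    """A predefined metric and the probabilistic threshold it must stay within."""
    metric: str              # e.g. "false_positive_rate"
    max_value: float         # predefined acceptance threshold (Article 9 style)
    min_samples: int = 100   # don't judge on too little post-market data

def evaluate_post_market(samples: dict[str, list[float]],
                         thresholds: list[RiskThreshold]) -> list[str]:
    """Compare collected performance data against predefined thresholds.

    Returns the metrics that breach their threshold: the trigger for the
    "regular systematic review and update" cycle rather than a one-off report.
    """
    breaches = []
    for t in thresholds:
        observations = samples.get(t.metric, [])
        if len(observations) < t.min_samples:
            continue  # not enough evidence yet; keep collecting (Article 72)
        if mean(observations) > t.max_value:
            breaches.append(t.metric)
    return breaches

# Illustrative usage: 120 production observations against one predefined threshold.
collected = {"false_positive_rate": [0.04, 0.07, 0.06] * 40}
if evaluate_post_market(collected, [RiskThreshold("false_positive_rate", max_value=0.05)]):
    print("threshold breached: re-open the risk management cycle")
```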

Article 12: technical, not paper

Article 12 requires high-risk AI systems to be designed and developed so that the automatic recording of events (logs) is technically possible. Again, the language is deliberate: the requirement attaches to system design, not to a recording procedure bolted on afterwards.

The logging must support traceability appropriate to the system's intended purpose, identification of risk situations, post-market monitoring, and deployer monitoring of system operation. The Act is silent on format (no universal schema is prescribed), but the functional requirement is unambiguous: the system must be capable of producing interpretable, contemporaneous evidence of what it did and when.
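
As a sketch of what that capability can look like in practice, the snippet below emits one structured record per inference, written at the moment of the event. The schema and field names are assumptions for illustration; the Act mandates the capability, not any particular format.

```python
import json
import uuid
from datetime import datetime, timezone

def record_inference_event(model_version: str, input_ref: str, output_ref: str,
                           risk_flags: list[str],
                           log_path: str = "inference_events.jsonl") -> str:
    """Append a contemporaneous, interpretable record of what the system did and when.

    One JSON object per line keeps the log machine-readable and human-inspectable,
    supporting traceability, identification of risk situations, post-market
    monitoring and deployer monitoring of operation.
    """
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),  # recorded as the event happens
        "model_version": model_version,
        "input_ref": input_ref,      # references rather than raw content, to limit personal data in logs
        "output_ref": output_ref,
        "risk_flags": risk_flags,    # e.g. low-confidence or out-of-distribution markers
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event["event_id"]

# Illustrative usage at inference time, immediately before the output is delivered.
record_inference_event("credit-scoring-v3.2", "req-8841", "out-8841", risk_flags=[])
```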

Providers and deployers must then keep the logs under their control for a period appropriate to the system's intended purpose, and for at least six months (Articles 19 and 26).

Article 13: transparency, made operational

Article 13 requires systems to be sufficiently transparent that deployers can interpret outputs and use the system appropriately. The information provided must be relevant, accessible and comprehensible. Among the required contents: intended purpose, performance characteristics, known limitations, human oversight measures, and, critically, mechanisms allowing deployers to collect, store and interpret logs in accordance with Article 12.
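
Read alongside the logging sketch above, the Article 13 clause is about giving deployers the means to use those logs. A minimal interpretation helper over the same hypothetical event format might look like this; the format itself is an assumption, not a prescribed schema.

```python
import json

def summarise_events(log_path: str = "inference_events.jsonl") -> dict:
    """Give a deployer an interpretable view of the logs the system produced.

    Counts events and surfaces those carrying risk flags, so the deployer can
    monitor operation and spot risk situations without parsing raw records.
    """
    total, flagged = 0, []
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            total += 1
            if event.get("risk_flags"):
                flagged.append((event["timestamp"], event["event_id"], event["risk_flags"]))
    return {"total_events": total, "flagged_events": flagged}
```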

Article 26 adds a deployer obligation to inform natural persons that they are subject to the use of a high-risk AI system. Article 86 creates a right to a clear and meaningful explanation of certain decisions.

The pattern across all three articles: transparency, traceability and oversight are operational properties of the system, not artefacts produced once at conformity assessment.

The Omnibus caveat

An honest article must acknowledge the elephant in the room: the Commission's November 2025 AI Omnibus proposal would push the compliance deadline for stand-alone Annex III high-risk systems from 2 August 2026 to 2 December 2027, and for high-risk systems embedded in regulated products to 2 August 2028. The Council gave the green light to trilogue negotiations on 13 March 2026. The proposal is not yet law.

The legal advice from firms like A&O Shearman has been consistent: plan against the original deadline until formal adoption. Whether the enforcement date is August 2026 or December 2027, the operational character of Articles 9, 12 and 13 does not change. Delay does not convert runtime requirements into paperwork requirements.

What operational proof looks like

An organisation satisfying Articles 9, 12 and 13 operationally, not just documentarily, can answer three questions about any specific AI output delivered to any specific user: what risk evaluation governed this output before it reached that user (Article 9); what contemporaneous record shows what the system did, and when (Article 12); and what information the deployer had available to interpret the output and exercise oversight (Article 13).

These are runtime questions. They require the system itself to produce contemporaneous evidence. They cannot be answered retroactively from a policy document.
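
In code terms, and reusing the hypothetical event format from the sketches above, "answerable" reduces to something like this: either a record made at the time of the output exists, or the questions have no answer.

```python
import json

def evidence_for_output(output_ref: str,
                        log_path: str = "inference_events.jsonl") -> dict | None:
    """Return the contemporaneous record for one specific output, or None.

    If the record exists, the three questions are answered from fields written
    at the moment of the event; if it does not, no policy document written
    afterwards can supply the answer.
    """
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            if event.get("output_ref") == output_ref:
                return event
    return None
```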

The compliance market is maturing

The compliance market is not wrong to start with inventory, classification and gap analysis; those are necessary prerequisites. What is missing, and what the regulators and the Act itself point toward, is the next maturity step: evidence generated by the system, at the moment of action, by architectural design.

This is where aiGUARD sits. Our patented execution-control architecture (GB2603184.9) is positioned structurally between AI output generation and delivery. It evaluates each output against runtime parameters (confidence, consequence, user state) and enforces a mandatory permission step before delivery. Our Governance Execution Certificate (GB2607087.0) issues cryptographic per-inference evidence that governance was applied. Not policy about governance. Evidence of governance.
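
The pattern, stripped of anything implementation-specific, can be sketched generically: a gate between generation and delivery that evaluates runtime parameters and emits a signed per-inference record. The policy, parameter names and signing scheme below are illustrative assumptions, not aiGUARD's design.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative only; in practice a managed signing key

def governance_gate(output: str, confidence: float, consequence: str,
                    user_state: str) -> tuple[bool, dict]:
    """Evaluate one output against runtime parameters and decide whether delivery is permitted.

    Returns (permitted, certificate). The certificate is a signed record that this
    evaluation happened for this specific inference: evidence of governance, not policy.
    """
    # Illustrative policy: block low-confidence outputs in high-consequence contexts.
    permitted = not (consequence == "high" and confidence < 0.8)

    record = {
        "timestamp": time.time(),
        "output_digest": hashlib.sha256(output.encode()).hexdigest(),
        "confidence": confidence,
        "consequence": consequence,
        "user_state": user_state,
        "permitted": permitted,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return permitted, record

# Illustrative usage: the gate runs on every output, and delivery depends on its decision.
permitted, certificate = governance_gate("Decline the loan application.",
                                         confidence=0.62, consequence="high",
                                         user_state="unverified")
print("deliver" if permitted else "withhold and escalate", certificate["signature"][:16])
```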

The Act does not mandate cryptographic per-output certificates. It does require the operational properties that such certificates most robustly satisfy.

Bottom line

The EU AI Act is a well-designed piece of law. The compliance market's first response has been to package it as a readiness programme. The Act's actual text asks for more: a risk management system that operates, logs that are technically generated, and transparency that equips deployers for real interpretation.

Operational proof is not a burden beyond the Act. It is the natural implementation pattern for what the Act already requires.

Whatever the enforcement date, the architecture of compliance is the same.

— Christopher Hamilton