On 31 March 2026, the Digital Regulation Cooperation Forum (DRCF) — the joint forum of the Competition and Markets Authority, Financial Conduct Authority, Information Commissioner's Office and Ofcom — published The Future of Agentic AI.

It is expressly a non-binding foresight paper. That caveat matters and deserves stating upfront. No new law has been proposed. No specific technical architecture has been mandated. The paper itself says it "should not be taken as an indication of current or future policy by any member regulator."

What the paper does is identify, with unusual precision for a document of this kind, the governance mechanisms the DRCF sees as important design and oversight considerations as AI systems move from generating responses to taking actions.

Those considerations are worth reading carefully.

From tool to actor

The paper's framing is the shift from AI as a tool (that responds to prompts) to AI as an actor (that plans, uses tools, holds memory, and acts in external environments). Agents set goals, or have goals set for them. They access data. They call APIs. They execute payments. They interact with other agents.

Each of those capabilities creates control points that didn't exist when AI was a model behind a chat window.

The control surface

Across its governance discussion, the DRCF paper points to a concrete control surface, including:

- identity and access management for agents
- logs of activity
- human operator override
- regular performance and safety checks
- action-level permissioning and runtime enforcement of those permissions
- traceability and auditability of individual actions

Some of these — identity and access management, logs of activity, human operator override, and regular performance and safety checks — are the DRCF's exact terms, named together in a single passage. The others are fair paraphrases drawn from the paper's wider governance discussion.

This is not a mandate, and it is not formal policy. But it is a clear map of the governance surface the DRCF is focusing on as agentic AI moves from generating responses to taking actions.
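
To make that surface concrete, here is a minimal sketch in Python of the first control, identity and access management, bound to an agent. Everything in it (the names AgentIdentity, PermissionSet and is_authorised, and the spend-limit example) is a hypothetical illustration, not DRCF terminology or any vendor's API.

```python
# A minimal, hypothetical sketch of identity and access management
# for an agent. Names are illustrative, not DRCF terms.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """A distinct, verifiable identity for one agent instance."""
    agent_id: str
    operator: str  # the accountable organisation or person

@dataclass
class PermissionSet:
    """What an identity may do, granted explicitly and scoped."""
    allowed_actions: set[str] = field(default_factory=set)
    spend_limit_gbp: float = 0.0  # example of a scoped constraint

def is_authorised(identity: AgentIdentity,
                  permissions: dict[str, PermissionSet],
                  action: str,
                  amount_gbp: float = 0.0) -> bool:
    """Deny by default: anything not explicitly granted is refused."""
    grant = permissions.get(identity.agent_id)
    if grant is None:
        return False
    if action not in grant.allowed_actions:
        return False
    return amount_gbp <= grant.spend_limit_gbp
```

A real deployment would bind identities cryptographically and keep grants out of the agent's own reach; the point of the sketch is only that "identity and access management" resolves to checkable data, not prose.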

Four regulators, four lenses

Each of the four DRCF member regulators brings its own statutory entry point to the same set of concerns.

The CMA has been explicit: businesses remain responsible for what an AI agent does on their behalf. UK consumer law applies whether decisions are made by humans or AI. The CMA's separate guidance of 9 March 2026 walks businesses through the practical implications for disclosure, training, oversight and monitoring.

The FCA's current approach is principles-based and outcomes-focused. The second cohort of its AI Live Testing service opened for applications on 19 January 2026, with testing due to begin in late April. The Mills Review is examining the longer-term impact of AI on retail financial services. Neither programme has produced an agentic-AI-specific rulebook; both signal that existing Consumer Duty, permissions, and senior manager accountability frameworks continue to apply.

The ICO has been clearest that design choices shape data protection outcomes. Its Tech Futures paper on agentic AI, published earlier in 2026, explicitly identifies agentic controls as a design consideration. On 31 March 2026 the ICO opened a consultation on draft guidance about automated decision-making (ADM), including profiling; a statutory Code of Practice on AI and ADM remains in preparation. Records, decision rationales, data protection impact assessments (DPIAs), and traceable logs are expected to be in evidence, not just in policy.

Ofcom is integrating agentic AI into its online safety and telecoms statutory remits. For user-to-user or search services, the Online Safety Act's duties may apply depending on how the agent is used.

The through-line is not a new regulatory architecture. It is a coordinated insistence that existing regimes apply, and that organisations must be able to demonstrate their application.

Saying "the agent did it" is not a defence. Saying "we wrote a policy" is not a demonstration.

What this means in practice

Three things follow if you read the paper as an operational design signal rather than abstract horizon scanning.

First, the governance surface is moving from the model to the action. The question "did the AI produce a good response?" is being replaced by "was this action authorised, logged, overridable and attributable before it happened?"
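
Read operationally, that second question describes a gate that runs before each action, not a review that runs after it. A minimal sketch, reusing the hypothetical is_authorised check and AgentIdentity from the earlier example; the override hook is equally hypothetical, and none of it comes from the DRCF paper:

```python
# Hypothetical pre-action gate: authorised, overridable, logged and
# attributable before anything executes.
import datetime
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gate")

class OverrideSignal:
    """Stand-in for a human operator's halt switch."""
    def __init__(self) -> None:
        self.halted = False

def clear_to_execute(identity, permissions, action: str,
                     override: OverrideSignal,
                     amount_gbp: float = 0.0) -> bool:
    # 1. Authorised: an explicit, scoped grant exists.
    if not is_authorised(identity, permissions, action, amount_gbp):
        log.warning("denied: %s for agent %s", action, identity.agent_id)
        return False
    # 2. Overridable: a human can halt before execution, not after.
    if override.halted:
        log.info("halted by operator before %s", action)
        return False
    # 3. Logged and 4. attributable: who, for whom, what, when,
    # recorded before the action runs.
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    log.info("%s | agent=%s operator=%s | %s",
             stamp, identity.agent_id, identity.operator, action)
    return True  # only now may the caller run the action
```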

Second, organisational responsibility cannot be outsourced to the agent. The CMA's March guidance makes clear that businesses remain responsible for what their AI agents do, just as they are responsible for what their employees do.

Third, written policies alone will not be enough. The ICO's accountability approach points toward evidence — records, DPIAs, decision rationales, traceable logs, and meaningful human involvement where required. Policy documents without operational proof will not satisfy it.
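
As an illustration of what such evidence might look like as data rather than as a policy document, here is a minimal sketch of a per-decision record. The field names are assumptions loosely modelled on the expectations listed above, not taken from any ICO guidance.

```python
# Hypothetical per-decision evidence record, written at decision
# time. Field names are assumptions, not drawn from ICO guidance.
import datetime
import json
import uuid

def evidence_record(agent_id: str, operator: str, action: str,
                    rationale: str, dpia_ref: str,
                    human_involved: bool) -> str:
    """One append-only audit line per agent decision."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(
            datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,
        "operator": operator,
        "action": action,
        "rationale": rationale,      # why the agent acted
        "dpia_ref": dpia_ref,        # pointer to the relevant DPIA
        "human_involved": human_involved,
    }
    return json.dumps(record, sort_keys=True)
```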

The maturity step for agentic AI is not more documentation. It is more operational proof. The DRCF's control surface is a useful early checklist for that proof.

Where aiGUARD sits

aiGUARD is a patent-pending, architecturally enforced execution-control layer for AI outputs. Its underlying execution-control architecture is the subject of UK patent applications GB2603184.9 (Thames Sentinel) and GB2607087.0 (Governance Execution Certificate). It sits structurally between AI output generation and delivery, and enforces identity-bound permissions, action-level logs, human override triggers, and per-action cryptographic evidence.
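
For illustration only, and emphatically not aiGUARD's actual mechanism, which is not public at this level of detail: per-action cryptographic evidence in general can be as simple as a tamper-evident tag over a canonical serialisation of each action record. A sketch using HMAC-SHA256 from the Python standard library:

```python
# Illustrative only: NOT aiGUARD's mechanism. One familiar way to
# make per-action evidence tamper-evident is an HMAC-SHA256 tag
# over a canonical serialisation of the action record.
import hashlib
import hmac
import json

def sign_action(secret_key: bytes, record: dict) -> str:
    """Tag a canonical action record so later edits are detectable."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hmac.new(secret_key, canonical, hashlib.sha256).hexdigest()

def verify_action(secret_key: bytes, record: dict, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_action(secret_key, record), tag)
```

Anything stronger, such as asymmetric signatures or append-only transparency logs, is a design choice beyond this sketch.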

aiGUARD was designed around the same classes of control the DRCF paper surfaces: permissioning, runtime enforcement, logging, human intervention, action-level traceability and auditability. That is not a claim of regulatory certification, legal compliance, or DRCF endorsement.

The DRCF has not reviewed, approved or endorsed aiGUARD. The point is alignment of control architecture with the direction of regulatory concern, not certification against a DRCF standard.

A signal, not a specification

The DRCF paper is, strictly, a non-binding foresight paper. But taken alongside the CMA's consumer-law position, the ICO's ADM work, the FCA's live testing, and Ofcom's emerging statutory lens, it is a useful early indication of where UK regulatory concern on accountable agentic AI is focused.

Businesses treating it only as abstract horizon scanning may miss the operational direction of travel. Businesses turning it into evidence, controls and auditability will be better prepared.

— Christopher Hamilton