The Architectural Shift Required to Govern AI Agents

AI is no longer just a chatbot that generates text. In enterprise environments, AI agents are taking actions such as retrieving sensitive data, triggering workflows, calling tools, and logging activity across systems. Autonomy changes the governance discussion entirely; controls and procedures initially designed for human users and traditional applications were not built to govern software that can execute multi-step actions at runtime.
The risk is not theoretical. Small gaps in visibility, access control, and auditability can compound quickly, turning into runtime failures that are difficult to detect and even harder to reverse.
Governing AI agents in this new era cannot be done by adding more policy documents. It requires governance by design: an architectural approach in which controls are embedded in the control plane and enforced continuously at runtime. If agents are going to act like digital colleagues, they must inherit the same enterprise guardrails as humans, plus stronger runtime oversight.
Why governance breaks in the era of convergence
Enterprise architecture has entered an era of convergence. Data and workloads now span multiple clouds, private data centers, and edge environments.
Many organizations run their platforms as parallel systems because they have multiple processes to manage simultaneously: separate identity systems, logging pipelines, catalogs, and approval processes. The result is what some call a “Frankenstein platform,” where integration overhead grows with every new tool or cloud environment. This fragmentation shows up in everyday reality.
According to a recent survey, 47% of respondents cite complicated access requirements and processes, and 44% cite limited visibility into where data resides as barriers to using data effectively.
This is exactly where agents expose the seams between systems.
To answer a business question, an agent may have to pull data from an on-premises ERP system, a cloud CRM, operational telemetry in another cloud, and documents in a collaboration suite. If the organization enforces policy differently in each place, the agent will either fail or, worse, succeed in ways you cannot explain or control.
This is the moment when enterprise leaders must pay attention. Agents are forcing a higher bar that demands consistency across environments and accountability at runtime.
For this reason, regulators and security agencies are pulling governance into the spotlight. The NIST AI Risk Management Framework, for example, emphasizes risk management across the AI lifecycle, not just at build time. It is a reminder that compliance and trust are operational responsibilities, not one-time checklists.
From policy to platform
Governance by design means that governance travels with the workload rather than being reimplemented in every silo. In practice, this depends on three building blocks:
1. A unified control plane
One place to define and enforce identity, access, policy, catalogs, and entitlements across clouds and data centers.
The goal is to write policies once and enforce them wherever data and models run, rather than rebuilding control systems system by system. This prevents agent behavior drift, where the same agent behaves safely in one environment but dangerously in another.
A practical test is simple: if a user cannot access a column, verify that an agent acting on their behalf cannot access it either. If the agent can, your written policies are not actually being enforced across the plane.
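This permission-parity test can be sketched in a few lines. The `can_access` function and the grant table below are hypothetical stand-ins for a real policy engine's decision API; the point is the invariant that an agent's effective grants are the intersection of its own grants and those of the user it acts for, never a superset.

```python
from typing import Optional

# Hypothetical column-level grant table (stand-in for a real policy engine).
USER_GRANTS = {
    "analyst_jo": {"orders.total", "orders.region"},  # no access to orders.card_number
}

def can_access(principal: str, column: str, on_behalf_of: Optional[str] = None) -> bool:
    """An agent acting on behalf of a user inherits at most the user's grants:
    effective grants = agent grants INTERSECT user grants."""
    grants = USER_GRANTS.get(principal, set())
    if on_behalf_of is not None:
        grants = grants & USER_GRANTS.get(on_behalf_of, set())
    return column in grants

# Register the agent with broad nominal grants...
USER_GRANTS["support_agent"] = {"orders.total", "orders.region", "orders.card_number"}

# ...but acting for analyst_jo, it still cannot read the restricted column.
assert can_access("analyst_jo", "orders.card_number") is False
assert can_access("support_agent", "orders.card_number", on_behalf_of="analyst_jo") is False
assert can_access("support_agent", "orders.total", on_behalf_of="analyst_jo") is True
```

The design choice worth noting is the intersection: granting the agent broad standing permissions is safe only if delegation always narrows them back down to the human's own scope.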
2. A data fabric grounded in open standards
Agents need context to operate. When that context is spread across different structures owned by different teams, a data fabric helps standardize semantics and access patterns, so agents do not have to learn a new set of rules for each dataset.
Open table formats like Apache Iceberg support this by allowing multiple engines to share the same governed data without copying it into a new silo. This is important because data duplication is where governance usually fails. Once teams start copying “just what the agent needs,” you have created a new, less governed environment.
If agents can operate across datasets without introducing new permission gaps, governance is working as intended.
3. Real-time observability and lineage
Agents are only governable if you can see what they are doing at runtime.
Observability here is not just a “nice-to-have,” but is the foundation for runtime controls and incident response.
Specifically, there needs to be end-to-end proof of agent actions: which data was accessed, which tools were called, and, through lineage, how outputs connect back to inputs. This lets teams audit decisions, troubleshoot failures, and demonstrate compliance.
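A minimal sketch of what such an audit trail might look like, assuming nothing beyond an append-only event log: each agent action becomes an event, each event records the datasets or prior events it consumed, and lineage is a transitive walk over those inputs. The event shape and function names here are illustrative, not any particular product's API.

```python
import time
import uuid

def record_action(log: list, agent_id: str, action: str, inputs: list, tool: str) -> str:
    """Append one auditable event; return its id so later outputs
    can reference it as a lineage input."""
    event = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "tool": tool,
        "inputs": inputs,  # dataset names and/or prior event ids this step consumed
    }
    log.append(event)
    return event["id"]

def lineage(log: list, event_id: str) -> set:
    """Walk inputs transitively to recover everything an output depended on."""
    by_id = {e["id"]: e for e in log}
    seen, stack = set(), [event_id]
    while stack:
        eid = stack.pop()
        for src in by_id.get(eid, {}).get("inputs", []):
            if src not in seen:
                seen.add(src)
                stack.append(src)
    return seen

log = []
e1 = record_action(log, "support-agent", "read", ["crm.cases"], "sql")
e2 = record_action(log, "support-agent", "summarize", [e1], "llm")
assert "crm.cases" in lineage(log, e2)  # the answer traces back to its source data
```

In production this log would live in an immutable store and carry policy-check results alongside each event, but the core requirement is the same: every output must be walkable back to its inputs.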
Treat agents like “digital colleagues”
One of the most useful mental models is treating agents as digital colleagues.
Here’s a comparison that breaks this down: just as employees have access badges that grant entry to some buildings and rooms but not others, governance grants agents access with restrictions. One key addition is that agents must be situationally aware of what they are allowed to reveal.
Consider a support agent. It may need to access prior support cases to solve a problem, but it cannot leak another customer’s private details while doing so. Put differently, the agent can use restricted knowledge to reason, but must still enforce disclosure boundaries. This is not a prompt-writing problem of the kind we have historically known how to navigate; it is an identity and runtime enforcement problem.
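One way to sketch that separation, under the assumption that retrieval and disclosure are distinct runtime stages: the agent retrieves broadly to reason, but a hard filter outside the prompt decides what may be surfaced verbatim. The case data and function names below are invented for illustration.

```python
# Disclosure boundary enforced at runtime, outside the prompt.
CASES = [
    {"case_id": 1, "customer": "acme",   "note": "Reset via SSO portal fixed login loop"},
    {"case_id": 2, "customer": "globex", "note": "Same loop; root cause was clock skew"},
]

def retrieve_for_reasoning(query: str) -> list:
    """Full retrieval: the model may learn from all relevant prior cases."""
    return [c for c in CASES if "loop" in c["note"].lower()]

def disclosure_filter(records: list, requesting_customer: str) -> list:
    """Hard boundary: only the requester's own records may be revealed verbatim."""
    return [c for c in records if c["customer"] == requesting_customer]

context = retrieve_for_reasoning("login loop")
disclosable = disclosure_filter(context, requesting_customer="acme")
assert len(context) == 2                                   # agent reasons over both cases
assert [c["customer"] for c in disclosable] == ["acme"]    # but reveals only its own
```

The key property is that the filter runs regardless of what the prompt says: even a jailbroken model cannot disclose records the enforcement layer never hands back.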
What changes in 2026: agents move from experiments to production
2026 is the year when experiments end, and agents take the production seat.
This shift forces enterprises to operate at two speeds. One is innovation speed, where teams test new models, tools, and agent workflows to gain a competitive advantage. The other is secure speed, where systems must meet compliance and operational requirements: strict access controls and the elimination of blind spots.
Without architectural governance, these two speeds will conflict.
If teams deploy agents before they are governed, the result is a patchwork of one-off controls and operational failures. If security instead blocks everything, innovation moves to shadow IT, which undermines governance just as badly.
The objective is not to pick a speed. It is to build an architecture that supports both.
A practical checklist for governing agents at runtime
If you are building or scaling agents, ask yourself the following questions to reveal whether governance is truly architectural:
- Can you explain, end-to-end, what data an agent accessed to produce an answer or take an action?
- Are access decisions consistent across hybrid environments, or do they differ by platform?
- Do you have telemetry for agent actions, including tool calls, policy checks, and human escalations?
- Can you throttle, pause, or quarantine an agent at runtime if it behaves unexpectedly?
- Do you have a post-deployment monitoring plan that aligns with your regulatory obligations and risk appetite?
If you cannot answer these, treat your agent deployment like a production incident waiting to happen.
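The throttle/pause/quarantine question from the checklist can be sketched as a runtime gate that every tool call must pass through. The class and thresholds below are a minimal illustration, not a real product's control API; the point is that operators can flip the agent's state, and anomalous bursts trip it automatically, without redeploying anything.

```python
from enum import Enum

class AgentState(Enum):
    RUNNING = "running"
    PAUSED = "paused"
    QUARANTINED = "quarantined"

class RuntimeGovernor:
    """Minimal runtime control: every tool call passes through authorize(),
    which operators can override and which auto-quarantines on a call burst."""
    def __init__(self, max_calls: int = 60):
        self.state = AgentState.RUNNING
        self.budget = max_calls   # allowed tool calls per monitoring window
        self.calls = 0

    def authorize(self, tool: str) -> bool:
        if self.state is not AgentState.RUNNING:
            return False                           # paused or quarantined: deny everything
        if self.calls >= self.budget:
            self.state = AgentState.QUARANTINED    # anomalous burst: auto-quarantine
            return False
        self.calls += 1
        return True

gov = RuntimeGovernor(max_calls=2)
assert gov.authorize("crm.search")
assert gov.authorize("crm.search")
assert not gov.authorize("crm.search")             # over budget, agent is quarantined
assert gov.state is AgentState.QUARANTINED
```

A real deployment would track rates per window, emit the denial into the audit log, and page a human on quarantine, but the architectural requirement is the same: the off switch lives in the control plane, not in the agent.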
The governance shift needs to be architectural, or else it does not exist
Agents will become a standard addition to enterprise operations. The question is whether they will become a reliable part of enterprise operations.
If agents are not governed at least as rigorously as humans and mission-critical software, the consequences will be real: data leakage, compliance failures, operational outages, and loss of trust in AI programs.
Leaders need to stop treating agent governance as a documentation exercise. As platform capabilities expand, agent governance should become a first-class capability of the platform itself: embed controls in the control plane, make actions observable and decisions auditable, and only then scale.
That is how you get agents that move fast without breaking the enterprise.