Thought Leaders
Data— Not AI— Is the Key

Artificial intelligence has become so deeply ingrained in businesses that nearly every operation has been impacted by the technology in some way. And looking at the use of AI specifically, we are seeing organizations diving in with newer forms of AI to innovate and iterate on existing systems. In fact, a recent survey of IT leaders found that 98% are either already using agentic AI to orchestrate GenAI use cases or plan to do so in the near future.
Amid the explosion of AI tools and technologies that have arrived over the last few years, AI agents are quickly becoming one of the most popular. These agents help organizations do anything and everything—from improving customer experience and support to automating internal processes or optimizing existing GenAI models already in use. But scaling the many benefits of AI agents, and AI at large, across an entire enterprise is not without difficulty.
The reason many organizations struggle with AI, and AI agents in particular, at scale comes down to trust, not technology. AI agents, by nature, work across a multitude of systems. Wherever those systems are, it’s more than likely that they depend on highly sensitive data—whether that’s a huge volume of customer data, medical information, or banking and financial data. This is where the problem lies. Pulling in massive amounts of data into any AI model, without the proper data privacy and security infrastructure, leaves enterprises with a significant amount of risk.
No matter the output of an AI model, it is only worthwhile if the data that trained it can be trusted. But it’s about far more than just ensuring data is secured. Especially with AI agents, there is a great deal of autonomy involved in how these models operate. Ensuring they are equipped with an understanding of who should access data, when it should be accessed, and how, is critical for building trust.
Overcoming data privacy complications is not impossible, though. With the right data policies, metadata governance, APIs, and enterprise-grade authorization frameworks in place, enterprise IT leaders can ensure the data that fuels their AI is secure and trustworthy.
Let’s take a closer look.
Navigating Data Privacy and the Need for AI at Scale
One of the broader goals of integrating AI agents into an enterprise is to streamline workflows across operations and systems. But doing so without any guardrails could inadvertently expose sensitive data along the way. At a time when data breaches and nefarious attacks are constantly evolving, any data that gets exposed or accessed by unauthorized users could spell disaster—not just for an AI initiative, but the entire enterprise. The average cost of a data breach is upward of $4 million as of 2025, according to IBM. AI adoption is accelerating fast, oftentimes leaving governance and security in the dust as enterprise leaders push for more innovation, deeper insights, and new opportunities for growth. But even as AI adoption soars, regulatory policies and requirements are evolving to keep pace and ensure data remains secure.
From GDPR to the CCPA and even longstanding policies like HIIPA, regulatory complications pose a complex challenge for scaling AI agents. AI tools that require vast amounts of data, when left unchecked, invite an increased amount of risk. As AI models reach across all these internal systems, sensitive data is often moved and accessed in the process. When it comes to data, regulatory agencies worldwide are putting greater emphasis on ensuring privacy, effective governance, and robust security.
More recent policies like DORA—a set of guidelines on managing ICT risk for financial services firms operating in the EU—explicitly require ICT incident classification and reporting, including those that impact confidentiality, integrity, or availability of data. And while this policy has a primary emphasis on operational resilience, the implications stretch to AI adoption too. As more AI initiatives, including those with AI agents, tap into data at enterprise scale, the risk of unauthorized access grows. Should an AI project result in the loss or exposure of data, regulations like these would quickly become relevant.
With so much at stake, it’s important that enterprise organizations not lose sight of just how important security, governance, and data access are.
Building the Foundation to Fuel AI Agents
Enterprises need to build a foundation that is rooted in effective governance, with firm guardrails and enforceable rules that define what agents can and cannot do. At the heart of this foundation lies data governance—the high-level policies, standards, and structures that manage how data is used responsibly across the organization. These policies ensure agents don’t overstep their roles, whether by accessing restricted data sets or initiating processes without human oversight.
Implementing a robust data governance policy should start with a few key points. This includes accountability and ownership, data quality and consistency, security and privacy, compliance and auditability, and transparency and traceability.
With these points as the underpinning foundation of governance, enterprise leaders gain greater control over decision-making, more trust of their data, and reduced regulatory risk posed by data silos. This is done by tapping into capabilities like metadata management, data classification, and lineage to boost transparency and visibility into who, or what AI tools, can access. Each of these mechanisms allow enterprises to trace where data originates, how it flows, and how it’s transformed.
Tech is Important, but Trust is Paramount
Anytime a new AI model or innovation breaks onto the scene, adoption soars. But with any AI initiative, risks emerge—albeit not always where one might think. The technical challenges that often hinder adoption of new tools is not always the culprit behind slow AI integration. Oftentimes it boils down to data. Specifically, trust in that data and concerns around privacy. Because AI moves so quickly, it can sometimes be a challenge to ensure that things like access controls, data governance, lineage, and compliance keep up with that pace.
Governance is an important part of trust, but what it also requires is effective evaluations. Especially within agentic AI, there is still a major gap in standardized evaluations, yet they’re essential for proving systems that behave reliably and safely.
Whether you want to optimize the performance of internal systems, improve fraud detection, or just make the customer experience for clients smoother, the best AI agents, and AI initiatives at large, are all built on a foundation of trusted data, privacy, and security.












