New Security Exposures of Rapid GenAI Adoption that Organizations Must Address

Generative AI (GenAI) has catapulted from a curiosity to a central force in enterprise technology. Its ability to generate text, code, images, and insights on demand has made it indispensable for employees eager to cut through complexity and accelerate productivity. But with this innovation and efficiency comes massive exposure to risk.

In calls with executives and AI governance leaders across industries, one theme surfaces again and again: Data security has moved from one concern among many to the defining challenge of AI adoption. Unlike traditional software or even past waves of machine learning, GenAI fundamentally changes how an organization must secure its data.

A recent MIT study found that 95% of enterprise GenAI pilots are failing. It’s not because the technology is weak; it’s because enterprises lack the governance and security frameworks needed to operationalize GenAI appropriately and responsibly. In another MIT study, enterprise leaders cited data security as the top business and security risk hindering faster AI adoption. In addition, “shadow AI,” the unsanctioned employee use of public tools, is widely recognized as a driver of skyrocketing data risks beyond corporate control.

Least-privilege access is a security model in which any entity, whether a user, program, or process, is granted only the minimum level of access and permissions necessary to perform its legitimate functions. GenAI, however, upends the entire paradigm: Least privilege itself becomes a constraint that conflicts with the way these systems are designed to operate. This is because enterprise GenAI tools tend to deliver higher productivity gains when they have access to more business data and business context.

As GenAI adoption accelerates, users continue to discover new applications of GenAI, most of which emerge from organic experimentation and curiosity rather than top-down, business-driven planning. If an organization cannot define in advance the tasks GenAI will be used for, or the types of data it needs access to, it becomes infeasible to set up least-privilege access permissions. In addition, a user may have appropriate access to a dataset and legitimately provide it as input to a GenAI tool, but once that data is ingested, it is no longer bound by the user’s original permissions. Instead, it can be absorbed into the model, surfaced in future outputs, or made accessible to others using the same tool. Because GenAI does not necessarily inherit the data’s access controls, it effectively renders least privilege unenforceable.
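
To make this breakdown concrete, consider a minimal Python sketch (all names and values are hypothetical): a file read is guarded by an access control list, but once the data is pasted into a shared GenAI tool’s context, later users query the tool rather than the file, and the original ACL never applies again.

```python
# Minimal sketch (all names hypothetical) of why least privilege breaks
# down once data is absorbed into a shared GenAI tool.

ACL = {"q3_forecast.xlsx": {"alice"}}  # only alice may read this file

def read_file(user: str, path: str) -> str:
    # Classic least privilege: access is checked at the data boundary.
    if user not in ACL.get(path, set()):
        raise PermissionError(f"{user} may not read {path}")
    return "Q3 revenue forecast: $42M"

# Stand-in for the tool's shared memory, context window, or index.
shared_context = []

# Alice legitimately reads the file and pastes it into a prompt.
shared_context.append(read_file("alice", "q3_forecast.xlsx"))

def ask(user: str, question: str) -> str:
    # The tool answers from everything it has ingested. The original ACL
    # is never consulted here, so any user of the tool sees Alice's data.
    return " ".join(shared_context)

print(ask("bob", "What is the Q3 forecast?"))  # leaks despite the ACL
```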

GenAI Exposures to Consider

GenAI creates a vast and ever-expanding data surface, complicating enterprise data governance and security in several interconnected ways. These include:

Input leakage – GenAI can ingest data in its raw form, including text, images, audio, video, and structured data. End users can now direct GenAI tools to new datasets with minimal effort or expertise. Instead of being limited to carefully curated, structured tables with defined schemas and relationships, these datasets may include sales call recordings, CRM email notes, customer service transcripts, and more. In practice, employees are feeding prompts with highly sensitive business information, including customer PII, intellectual property, financial forecasts, and even source code.

Output exposure – Generative models don’t just consume; they synthesize. A prompt can unintentionally draw insights from across datasets and expose them to users without proper clearance. In some cases, outputs can even “hallucinate” data that appears legitimate but contains fragments of real, highly sensitive training material.

GenAI tools perform better when they have context for the task at hand. As a result, not only is GenAI ingesting existing information, but users are also creating new data to guide it in the form of extensive, detailed prompts that document business context, internal processes, and other potentially sensitive or business-critical information.

Accessibility without oversight – Traditional enterprise systems required vendor onboarding and IT provisioning. Today, GenAI is embedded everywhere: in Microsoft Office suites, browsers, chat tools, and SaaS platforms. Employees can adopt it instantly, bypassing governance entirely. This frictionless access fuels “shadow AI,” and every unsanctioned use of GenAI is a potential data exfiltration event happening invisibly, at scale, and outside an enterprise’s governance perimeter.

Second-tier supply chain risk – A vendor may appear secure, but it often relies on subcontractors such as cloud hosts, annotation services, or third-party AI labs, each introducing its own end user license agreements (EULAs) and policies. Sensitive enterprise data can ripple through multiple unseen hands, yet accountability remains squarely with the enterprise. For example, a vendor that previously completed an enterprise’s onboarding process may now use a GenAI tool that allows the enterprise’s data to be used as training data, with significant downstream impacts.

Governance gaps in training data – Once data enters an AI model, control effectively ends. Enterprises cannot easily retract or govern how their information is used. Proprietary knowledge may persist and then surface in outputs long after its source has been forgotten. We have yet to encounter a GenAI tool that honors requests to remove information it has ingested, akin to the deletion rights in privacy regulations such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), and such processes are unlikely to appear until regulation drives the change.

Application code risk – AI is increasingly writing the code that underpins business systems. Developers who use GenAI tools like Microsoft Copilot to generate code may unknowingly introduce insecure dependencies, propagate vulnerabilities, or embed code under conflicting open-source licenses (a minimal sketch of a dependency check follows this list). Once deployed, these weaknesses become embedded in the software supply chain.
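
As one illustration of a guardrail against the application code risk above, AI-generated code can be parsed before merge and its imports checked against a set of vetted dependencies. This is a rough sketch under assumed names (the allowlist and the generated snippet are hypothetical), not any particular vendor’s scanner:

```python
# Minimal sketch: flag imports in AI-generated code that fall outside an
# approved allowlist before the code is merged.

import ast

APPROVED = {"json", "logging", "datetime"}  # vetted dependencies (hypothetical)

generated_code = """
import json
import leftpad_util   # unvetted package suggested by a code assistant
"""

def unapproved_imports(source: str) -> set:
    # Walk the syntax tree and collect top-level package names from
    # both `import x` and `from x import y` statements.
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names - APPROVED

print(unapproved_imports(generated_code))  # {'leftpad_util'}
```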

Addressing GenAI Risk

GenAI is already embedded in enterprise workflows, so the question for enterprises is not whether to adopt it but how to do so responsibly. Adopting GenAI without governance risks costly breaches, regulatory penalties, and reputational damage. But blocking it only drives employees to use unsanctioned solutions. The only way forward is enablement wrapped in visibility and control.

GenAI governance requires context-driven visibility not only into what data an enterprise has, where it lives, and who has access to it, but also into how GenAI is used. Enterprises need to see which tools are being accessed, what prompts are being entered, and whether sensitive data is leaving their environment. From there, they can apply the appropriate controls to monitor prompts and outputs in real time, flag risky sessions or anomalous data flows, block unsanctioned tools, filter sensitive prompts before they leave, de-identify sensitive data as it is entered into prompts, and enforce role-based restrictions on AI-driven insights.
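
As a rough illustration of the de-identification control named above, a minimal Python sketch might rewrite obvious sensitive patterns before a prompt leaves the environment. The regexes here are illustrative placeholders, not a complete PII detector:

```python
# Minimal sketch: de-identify obvious sensitive patterns in a prompt
# before it is sent to an external GenAI tool. Patterns are illustrative.

import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def deidentify(prompt: str):
    """Replace matches with placeholders; return the findings for audit."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label}]", prompt)
    return prompt, findings

clean, hits = deidentify("Contact jane.doe@acme.com, SSN 123-45-6789.")
print(clean)  # Contact [EMAIL], SSN [SSN].
print(hits)   # ['EMAIL', 'SSN']
```

A production gateway would pair far richer detection (named-entity models, data classification lookups) with logging and policy enforcement, but the control point is the same: inspect and transform the prompt before it crosses the boundary.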

GenAI is a whole new layer of enterprise risk and opportunity. Managing it requires the mindset that security is not a brake on innovation but the foundation that makes it safe.

Dr. Shashanka is Chief Scientist and Co-Founder of Concentric. Before joining Concentric, he served as Managing Director of Charles Schwab's Data Science and Machine Learning team. He also co-founded and served as Chief Scientist of PetaSecure before its acquisition by Niara.

Lane Sullivan is Senior Vice President and Chief Information Security and Strategy Officer at Concentric AI, leading the company's global cybersecurity program and shaping product strategy for enterprise data security and AI governance. Previously, Lane was Senior Vice President and Chief Information Security Officer at Magellan Health, with a focus on compliance in a highly regulated environment. Earlier roles include directing a multi-million-dollar cybersecurity program at Ingram Content Group, providing infrastructure leadership at C&S Wholesale Grocers, managing operations and technology at JT Investments, and advancing healthcare IT at Basin Home Health & Hospice Inc. Lane holds a Master's degree in Computer and Information Systems Security and a Bachelor's degree in IT Management, both from Western Governors University.