Thought Leaders
How Should Enterprises Address AI Application Overload?

Not-so-Personal Computing
For years, technological development has been driven by the desire to make computing more accessible. From smartphones to tablets, laptops, and wearable tech, there is a general trend of making computers and their capabilities increasingly personal. Just consider the nomenclature of industry cornerstones: the “Personal Computer,” or PC; the “i” Pod, Pad, Phone, and beyond. Each of these devices is meant to act as a personal companion, helping streamline the user’s digital experience and, by extension, their life.
The development of artificial intelligence (AI)—particularly large language models (LLMs) and agentic solutions—is headed largely in the same direction. Like the PC and the iPhone, AI tools such as ChatGPT have made computing activities like searching, editing, and brainstorming more accessible to the general public. But as this accessibility has driven increased usage, it has also increased risk.
Whether through poisoned connectors or unintentional search engine indexing, the threats to private AI usage are becoming increasingly difficult to protect against. Add in uncertainties around who is using which tools, and businesses face a perfect storm of growing threats and inadequate protection. Given the promise of AI tools and the uncertainty of how to secure them, what’s a technology-forward enterprise to do?
The Growth of Unsanctioned AI Apps
ChatGPT’s user base grew more than 4x year over year, surpassing 700 million weekly users in 2025. Even so, only five million of those users are paying business customers, which makes the other 695 million or so either personal users or unsanctioned business users. That number seems large until you remember that ChatGPT is not the only AI solution on the market. In fact, it’s far from it: the contemporary explosion of AI and machine learning advancement has generated hundreds of thousands of new models and solutions, both public and private.
For businesses, the exponential growth of AI solutions has given rise to a new dimension of the cyber threat landscape known as “Shadow AI.” Like the phenomenon of “Shadow IT,” the term describes employees using AI solutions without the prior vetting and express approval of their organization. This usage bypasses IT and security oversight entirely, creating scenarios in which employees may be using dangerous or insecure solutions for sensitive work.
Shadow AI usage is both pervasive and expanding. One report found that the average enterprise has more than 320 unsanctioned AI apps in use. Each of these unsanctioned apps, and each employee using them, expands the threat landscape further.
The Inherent Risks of AI Overload
The threats associated with AI overload are multifaceted and, unfortunately, continually expanding. Some, like data poisoning, prompt injection, model manipulation, and data scraping, are well known. These direct attempts at AI manipulation and exploitation can give bad actors access to sensitive enterprise data and internal systems. Given this risk, enterprises are beginning to actively protect against such attacks.
But businesses can only protect themselves against the threats they know. Think about securing a medieval castle. Walls, watchtowers, gates, moats, and more help secure the castle against outside enemies and invaders. But what if the kitchen staff have a secret passage that bypasses the outer walls? Or a loudmouth guard is overheard discussing shift changes at the local tavern? Any of these unknown threats could help thieves bypass security and breach the inner walls.
Shadow AI has the same effect on enterprise data security. Even the well-intentioned use of unsanctioned LLMs and copilots can spell disaster for sensitive data protection, as any insecure solution may serve as a gateway for bad actors. Any LLM usage inherently raises the risk of issues like data retention, prompt injection, and model bias, each of which requires its own security measures. These issues, in turn, heighten the risk of data breaches, compliance violations, and operational failures, all of which are often identified only after the damage has been done.
Establishing Control Over AI Solutions
As AI usage continues to grow at both the personal and enterprise level, it’s crucial that organizations establish control over these solutions before adoption outpaces their ability to manage it. Securing enterprise AI and protecting against unsanctioned AI risks involves:
- Gaining comprehensive visibility. As mentioned, you can’t secure your data against threats that you’re unaware of. Security begins with oversight across all of the AI solutions that your employees are using—both sanctioned and unsanctioned. This is easier said than done, but a cloud access security broker (CASB) solution can help establish continuous visibility into employee application usage.
- Conducting thorough risk assessments. Teams should run third-party risk management (TPRM) efforts to ensure that all AI solutions are evaluated for their security posture and potential risks. These efforts should also extend to the fourth-party level, as even sanctioned applications like CRM and productivity platforms are beginning to embed external LLM capabilities. These fourth-party solutions must adhere to the same security standards as any internally approved application.
- Establishing governance frameworks. With this visibility into and understanding of their solutions, organizations can develop and deploy consistent AI governance policies throughout their application ecosystem. As with data access, these policies can control which AI solutions employees are able to access, what information they’re able to share with them, and how transparent their day-to-day usage is.
- Automating risk detection. The exponential growth of AI adoption makes manual risk detection and response nearly impossible. Automating these processes based on governance policies and expected user behavior can help proactively identify anomalies and limit their broader security impact (a minimal sketch of this idea follows this list).
- Setting employee expectations. The most important part of secure AI adoption and usage is your employees. Providing adequate training and setting clear expectations helps prevent risky AI usage while encouraging the safe, business-driving use of sanctioned and protected solutions.
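To make the governance and automation points concrete, here is a minimal sketch in Python of how a team might encode an AI usage policy and scan activity logs (for example, exports from a CASB or web proxy) for unsanctioned apps. Every name in it, including the AI_POLICY table, the flag_violations function, the app names, and the log format, is a hypothetical illustration rather than a reference to any specific product or API.

```python
# Hypothetical sketch: encode a simple AI governance policy and scan
# usage logs for unsanctioned AI app activity. All app names, fields,
# and data classifications below are illustrative assumptions.

from collections import Counter

# A minimal governance policy: which AI apps are sanctioned, and which
# data classifications may be shared with each of them.
AI_POLICY = {
    "chatgpt-enterprise": {"allowed_data": {"public", "internal"}},
    "approved-copilot": {"allowed_data": {"public"}},
}

def flag_violations(usage_logs):
    """Return policy violations found in a list of usage-log records.

    Each record is a dict like:
    {"user": "...", "app": "...", "data_class": "public|internal|confidential"}
    """
    violations = []
    for record in usage_logs:
        policy = AI_POLICY.get(record["app"])
        if policy is None:
            # The app is not on the sanctioned list at all: shadow AI.
            violations.append(("unsanctioned_app", record))
        elif record["data_class"] not in policy["allowed_data"]:
            # Sanctioned app, but a disallowed data classification.
            violations.append(("data_policy_violation", record))
    return violations

if __name__ == "__main__":
    logs = [
        {"user": "alice", "app": "chatgpt-enterprise", "data_class": "internal"},
        {"user": "bob", "app": "random-ai-notetaker", "data_class": "confidential"},
        {"user": "carol", "app": "approved-copilot", "data_class": "internal"},
    ]
    findings = flag_violations(logs)
    # e.g., Counter({'unsanctioned_app': 1, 'data_policy_violation': 1})
    print(Counter(kind for kind, _ in findings))
    for kind, record in findings:
        print(kind, record["user"], record["app"])
```

In practice, a check like this would run continuously against CASB or proxy telemetry and feed its findings into existing alerting. But even a simple allowlist of this kind turns a written governance policy into something that can actually be enforced.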
Sustaining Secure AI Adoption
By establishing these kinds of proactive security measures, businesses gain a more holistic view of the ways they’re using AI solutions—whether sanctioned or in the “shadows.” This has an immediate impact on security, helping teams to identify existing risks and threat vectors and take action to remedy them.
More importantly, however, it sets these businesses up for long-term AI success. By creating the technical foundation for secure AI adoption and a security-first culture among AI users, this multi-pronged approach helps create a sustainable model for future AI tools. Equipped with technical support and adequate training, employees can reap AI’s benefits without being saddled with its inherent risks.
Organizations that make these changes sooner rather than later will be best prepared for the future of AI, whatever it may bring. Putting in the front-end work to create a strong foundation—rather than simply rushing to check the box of AI adoption—will put teams in the best possible position for sustained AI-driven success.