Shining a light on Shadow AI

Across industries, workers are increasingly leveraging artificial intelligence (AI) tools to transform everyday operations. Whether it’s marketing professionals using ChatGPT to draft campaigns or software engineers writing code with generative tools, AI is quietly integrating into nearly every aspect of business. In fact, MIT’s Project NANDA found that in more than 90% of organizations, employees use personal chatbot accounts to support their daily work, frequently without IT oversight; yet only 40% of companies have official subscriptions to large language model (LLM) tools. This disconnect between sanctioned and secretive AI use has opened up a new security blind spot: shadow AI.
Without oversight, sensitive data can be shared with external models, outputs can be inaccurate or biased, and compliance teams lose visibility into where and how information is being processed. As employees continue to experiment with tools outside approved channels, organizations need to rethink how they manage AI, or these risks will keep bubbling to the surface. For CIOs, banning AI tools altogether is not the solution, as doing so could undermine the organization’s competitiveness. Instead, the focus should be on adaptable guardrails that balance innovation with responsible risk management.
To stay one step ahead of AI-related risks, organizations should establish AI policies that promote responsible adoption. These guidelines should encourage employees to use AI within reason, and any policy set forth must be aligned with the company’s values and risk appetite. To achieve a successful program, organizations must move beyond outdated governance models and legacy tools that don’t detect or track AI usage across their business.
While establishing an AI policy is an important step, the real crux is ensuring that policies are practical, enforceable, and aligned with each business’s innovative vision and compliance needs, without hindering progress.
Here are four steps organizations can take:
1. Determine the best framework
Businesses don’t have to start from scratch. Several leading institutions have already developed frameworks that offer valuable reference points. Resources from the Organisation for Economic Co-operation and Development (OECD), the National Institute of Standards and Technology (NIST), and ISO/IEC can provide guidance on managing AI safely and effectively. Additionally, governmental bodies are beginning to step in: new regulations from the European Union (EU) are setting expectations around transparency, accountability, and governance. Together, these resources provide a strong foundation on which businesses can build and refine their own frameworks to guide responsible AI adoption.
2. Increase AI visibility
As organizations chart their path toward effective AI risk management, it’s essential that security leaders develop a firm understanding of how AI is actually being used across their business. They should invest in tools that provide visibility into activity across functions. These tools can monitor and assess user behavior, leaving no stone unturned when it comes to identifying where generative AI is in use. This insight is vital for uncovering shadow AI and helps leaders evaluate and address the associated risks, as the sketch below illustrates.
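To make this concrete, here is a minimal sketch of the kind of check such discovery tooling performs under the hood: scanning a web proxy log for requests to well-known generative AI endpoints. The log format, column names, file path, and domain watchlist below are illustrative assumptions rather than any specific product’s behavior; commercial tools draw on far more services and signals.

```python
# Minimal sketch: flag requests to known generative AI domains in a proxy log.
# The watchlist, CSV columns, and file path are illustrative assumptions,
# not a real product's configuration.
import csv
from collections import Counter

# Hypothetical watchlist of generative AI endpoints (extend per your environment).
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests per (user, domain) for hosts on the watchlist.

    Assumes a CSV proxy log with 'user' and 'host' columns.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            # Match the domain itself or any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("proxy.log").most_common():
        print(f"{user} -> {host}: {count} requests")
```

In practice, signals like these would feed dashboards or data loss prevention alerts rather than a console printout, but the underlying question is the same: who is talking to which AI services, and how often.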
3. Form an AI Council
Once visibility is established, security leaders can use it to inform new policies. Since AI use affects the entire business, they should not silo the decision-making process. CISOs should consider forming an AI council that brings in key stakeholders from across security, legal, IT, and the C-suite. In fact, Gartner reports that 55% of organizations have already established an AI board, while 54% have appointed a head of AI or dedicated AI leader to oversee strategy and implementation.
These cross-functional groups can evaluate both the risks and opportunities presented by AI tools, whether authorized or not, that are beginning to infiltrate their business environments. They can then shape new policies together that address their businesses’ needs and balance risk.
For instance, if the council identifies a popular AI tool being used without approval that poses security concerns, it can recommend prohibiting that application while approving a safer alternative. These policies should be reinforced by investments in security controls and approved AI platforms that meet compliance standards. As new advancements come to market, the council can also introduce a formal process for employees to propose tools for vetting, helping the organization stay adaptive and compliant with its AI strategy.
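As a rough illustration of how such a council decision could be made machine-readable, the sketch below models an allow/deny policy that a gateway or endpoint agent might consult, with unknown tools routed to the council for vetting. The tool names and policy structure are hypothetical, not drawn from any real product.

```python
# Illustrative sketch of an allow/deny policy check for AI tools.
# Tool names and the policy structure are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIToolPolicy:
    allowed: frozenset[str]         # vetted, approved tools
    denied: frozenset[str]          # explicitly prohibited tools
    default_action: str = "review"  # unknown tools go to the council for vetting

POLICY = AIToolPolicy(
    allowed=frozenset({"approved-enterprise-llm"}),  # hypothetical sanctioned platform
    denied=frozenset({"unvetted-chatbot"}),          # hypothetical prohibited app
)

def evaluate(tool: str, policy: AIToolPolicy = POLICY) -> str:
    """Return 'allow', 'deny', or the default action for unlisted tools."""
    if tool in policy.denied:
        return "deny"
    if tool in policy.allowed:
        return "allow"
    return policy.default_action  # route to the AI council's vetting process

print(evaluate("unvetted-chatbot"))         # deny
print(evaluate("approved-enterprise-llm"))  # allow
print(evaluate("new-ai-notetaker"))         # review
```

Routing unlisted tools to a "review" state rather than blocking them outright mirrors the council’s vetting process: it keeps employees from being silently cut off while still giving the organization a decision point.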
4. Strengthen AI education
Educating and engaging employees around AI is another essential part of securing organization-wide buy-in and curbing shadow AI use. Even with the right governance models in place, success relies on how well employees understand and embrace them.
In one Pew Research survey, among the workers who had received on-the-job training (51% of respondents), only a quarter (24%) said it covered AI. Organizations should ensure they have AI training programs in place, as these can help employees understand the opportunities and functionalities available to them. Increased upskilling can also help employees grow in their careers and feel more satisfied in their roles.
AI training programs should go beyond basic use and compliance to help teams grasp the broader context of AI and how it operates within the business’s ecosystem. Topics to cover include why responsible AI use matters, what policies exist to protect both the business and its customers, and how to identify potential data handling or privacy risks. When employees are equipped with the right knowledge, they are empowered to become active partners in innovating safely.
Shaping the future of responsible AI
Shadow AI isn’t disappearing anytime soon. As generative technologies become further embedded into everyday workflows, the challenges for organizations will only intensify. Leaders now face the choice of whether to treat shadow AI as an uncontrollable threat or recognize it as an opportunity to maximize efficiency and productivity, when properly managed.
Developing an AI policy or board is not a one-time act. Just as AI is rapidly evolving, so too must an organization’s strategy. AI can help guide smarter decisions, accelerate innovation, and enhance productivity across every function. But to sustain those gains, employees must understand what tools are available to them and how to use them responsibly. The organizations that succeed will remain agile, proactively refining their policies to stay ahead of change rather than chasing it.