Thought Leaders
Why Banning AI Raises Security Risks and How Institutions Should Respond

Across the U.S., school districts and public-sector organizations are moving to restrict or block access to generative AI (GenAI) technologies or specific tools. In Colorado, the Boulder Valley School District recently banned ChatGPT on district networks, citing concerns about misuse, safety, and academic integrity.
The instinct to reduce exposure to security incidents or data misuse is understandable. Platforms with weak guardrails or unclear privacy commitments, like DeepSeek, warrant restriction and scrutiny. But banning access to GenAI tools doesn’t meaningfully reduce risk; it often just shifts it into environments where oversight disappears.
A College Board survey found 84% of high school students reported using GenAI for schoolwork, even as 45% of principals reported at least some restrictions on AI access at school. Similarly, an IBM report found that 80% of office workers use AI, but only 22% rely exclusively on employer-provided tools.
Access policies alone don’t determine behavior. Students can pull out their phones and use any AI tool over cellular data, or use the same platform at home or on public Wi-Fi. They can also use VPNs, remote desktops, and plugins to bypass restrictions. Employees can do the same to get around workplace controls.
Organizations should assume that where there’s a will to use AI, there’s a way. When restrictions push use beyond institutional visibility, the risk of shadow AI escalates: there’s no oversight of what information goes into prompts or what data the model retains, and any security control is effectively gone.
Beyond the risk of shadow AI, bans create a literacy gap, leaving students unprepared to use technology that will be a large part of their futures. These tools are increasingly embedded in search engines, business platforms, productivity suites, and personal devices. A Pew Research survey found that 62% of U.S. adults say they interact with AI at least several times a week. It’s all but guaranteed that students and employees will encounter GenAI systems regardless of institutional policies.
In this environment, education is the most reliable safeguard against misuse and security risks, and the surest way to ensure that students and workers alike are prepared to use a tool that will be integral to their careers. Teaching responsible and ethical use equips users to recognize data risks and make informed decisions wherever they encounter these systems.
Education programs should focus on how large language models (LLMs) process and retain data, how to identify hallucinations, how to verify AI-generated outputs, and how to spot phishing campaigns and AI-generated images, to name a few topics. Teach users to be skeptical. AI outputs are often presented confidently and in polished language, which can create an illusion of authority. Without training, users may assume that a well-formatted answer is inherently accurate.
The ability to question digital content is a frontline defense as deepfakes and AI-enhanced phishing campaigns become more sophisticated. A survey from Gartner found that 62% of organizations experienced a deepfake attack last year, and 32% faced an attack on AI applications. The frequency and scope of these incidents are only expected to keep rising.
Public-sector institutions such as schools and local governments are particularly exposed to deepfake-enabled social engineering because so much of their activity is recorded and publicly available. Audio snippets from public meetings can be manipulated and used to generate convincing voice calls. We’ve seen threat actors use this for fraud, such as redirecting funds during sensitive transactions. While this happens most often in targeted cases, users who have never been trained to recognize these techniques, or even know that they are possible, are at a disadvantage from the outset.
Following education, organizations should have clearly communicated policies regarding AI usage and governance. These should define approved tools, acceptable use cases, and what data can and cannot be input into which model(s). Policies need to be applied consistently across departments rather than varying from classroom to classroom or office to office. Clear expectations reduce ambiguity and reinforce accountability.
Instead of blanket restrictions, organizations should look to shape how the technology is used in practice. When an organization endorses a tool that is accessible, secure, and works well, it becomes the default for most users. Casual shadow AI use declines because there is a straightforward alternative that doesn’t require downloading a VPN to reach.
Organizations and institutions want to give users access to LLMs in a way that keeps their data secure, unshared, and excluded from model training. A growing category of AI enablement and security tools is emerging to do exactly that. These tools can provide access to multiple LLMs while keeping the institution’s data securely containerized. Zero data retention agreements add the legal framework: the organization’s data remains its property, and the LLM cannot be trained on it. And if an employee leaves, any AI usage, workflows, or data stays with the organization.
Technical guardrails can also be applied at the feature level. An institution might allow students or employees to ask general questions within a sanctioned LLM while disabling file uploads, document sharing, or other high-risk capabilities. These configurations preserve productivity benefits without opening the door to uncontrolled data exposure.
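As a rough sketch of what that might look like under the hood, the following Python snippet models a hypothetical policy check for a sanctioned tool; the field names and request types are illustrative, not any particular vendor’s configuration.

```python
# A sketch of feature-level guardrails for a sanctioned GenAI tool.
# The policy fields and request types are illustrative, not a vendor API.
from dataclasses import dataclass


@dataclass
class GenAIPolicy:
    allow_chat: bool = True          # general question-and-answer prompts
    allow_file_upload: bool = False  # block document uploads by default
    allow_sharing: bool = False      # block external sharing of conversations


def is_request_allowed(policy: GenAIPolicy, request_type: str) -> bool:
    """Return True if this request type is permitted under the policy."""
    permissions = {
        "chat": policy.allow_chat,
        "file_upload": policy.allow_file_upload,
        "share": policy.allow_sharing,
    }
    return permissions.get(request_type, False)  # deny anything unrecognized


if __name__ == "__main__":
    policy = GenAIPolicy()
    print(is_request_allowed(policy, "chat"))         # True
    print(is_request_allowed(policy, "file_upload"))  # False
```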
More advanced tools can automatically anonymize sensitive information before it ever reaches a model, for example replacing patient names or identifiers with neutral placeholders so doctors and nurses can still use GenAI without exposing protected data. Others integrate data loss prevention controls that detect and block Social Security numbers, financial account data, or other regulated information from being submitted in prompts.
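To make the anonymization idea concrete, here is a minimal Python sketch of prompt-level redaction under simple assumptions: pattern matching for Social Security numbers and account-style digit strings, plus a supplied list of names to mask. Real tools rely on far more sophisticated detection, but the principle is the same: sensitive values are swapped for placeholders before the prompt leaves the institution.

```python
# A minimal sketch of prompt-level data loss prevention: redact obvious
# identifiers before a prompt is sent to a model. The patterns and the
# placeholder scheme are illustrative; production tools use far richer
# detection (named-entity recognition, context-aware classifiers, etc.).
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # e.g. 123-45-6789
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # crude account/card match


def redact_prompt(prompt: str, known_names: list[str]) -> str:
    """Replace regulated identifiers and known names with neutral placeholders."""
    redacted = SSN_PATTERN.sub("[SSN]", prompt)
    redacted = CARD_PATTERN.sub("[ACCOUNT]", redacted)
    for i, name in enumerate(known_names, start=1):
        redacted = redacted.replace(name, f"[PATIENT_{i}]")
    return redacted


if __name__ == "__main__":
    prompt = "Summarize the chart for Jane Doe, SSN 123-45-6789."
    print(redact_prompt(prompt, known_names=["Jane Doe"]))
    # Summarize the chart for [PATIENT_1], SSN [SSN].
```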
Clear policies with technical guardrails, built on a foundation of education, create the best defense, especially as technology changes so quickly. GenAI is evolving faster than most public-sector organizations – and their budgets – can adapt. Attempting to block each new model as it emerges is unsustainable because by the time one platform is restricted, another has gained traction. Users trained to understand the underlying risks can adapt across tools and versions.
Security in an AI-enabled environment depends on acknowledging the reality that GenAI is now embedded into daily life. There’s no putting the genie back into the bottle. Blanket bans may signal caution, but they often trade visible, manageable risk for invisible, unmanaged exposure. Teach a person to use AI responsibly, and they’ll be set for whatever comes next.