The Danger of Saying No to AI

I dialed into a potential client’s company call last week. All the security and networking folks were there, plus management. Different presenters covering the topics of the moment. Then someone started explaining how his technical teams use AI in their daily work.
The presenter was excited. He’d found a way to compress three days of manual copy-and-paste work down to a couple of minutes using AI tools. Super cool, right? I get what he was doing—and why he was doing it. This is the promise of AI!
But as the security person in the room, I’m thinking: “Oh my God. What tool is that? Who owns that tool?” Because he’s putting customer information into these platforms—pricing data. Maybe financial information? Whatever it might be. All to speed up his workflow.
So, I asked the obvious question: “Do you have an AI policy?” We went looking. Found a blurb buried in the acceptable use policy. Something vague about AI tools. Does anybody actually read that? Probably not. You sign it during onboarding, and you’re done.
Your People Are Already Using AI
Here’s what I’ve learned talking to companies across every industry: AI is already everywhere, whether you acknowledge it or not. Recent research confirms this reality—75% of workers now use AI at work, nearly doubling in just six months.
Last week in Houston, I met a guy whose company drills for oil. They’ve got an AI platform that analyzes soil composition and weather patterns to optimize drilling locations. He explained how they factor in rainfall data: “We knew it rained 18 more days than usual in this area, so oil capacity is probably higher, which allows the drills to get in deeper.”
Meanwhile, I’m using AI to create an itinerary for Germany. I’ve talked to companies in healthcare using it for telehealth platforms. Finance teams running risk models. I even met someone running a lemonade business who’s leveraging AI somehow.
The point is: AI is in every industry, every workflow. It’s not coming—it’s here. And most of these organizations are in the same boat as that client: they know their people are using it. They just don’t know how to manage it.
Shadow IT Gets Dangerous Fast
Do you know what happens when you tell people they can’t use AI? It’s exactly like telling your teenager never to drink. Congratulations, now they’re going to be the ones drinking in the most unsafe ways, because you’ve created a prohibition.
The numbers back this up: 72% of generative AI use in enterprises is shadow IT, with employees using personal accounts to access AI apps.
People go get a second laptop. They use a personal phone that isn’t protected by company security. Then they use AI anyway. Suddenly you lose visibility and create the exact gaps you were trying to prevent.
I’ve seen this pattern before. Security teams that say “no” to everything end up driving people underground—and losing all oversight of what’s actually happening.
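To make that risk concrete, here’s a minimal sketch, purely my illustration rather than anything from this client, of one way a security team can start regaining visibility instead of just saying no: tallying proxy-log requests to well-known generative AI domains. The log file name, its column names, and the domain watchlist are all assumptions you’d adapt to your own environment.

```python
# Minimal sketch (illustrative assumptions): surface shadow AI usage by
# counting proxy-log requests to well-known generative AI domains.
import csv
from collections import Counter

# Hypothetical watchlist; a real one would be maintained and much longer.
AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def shadow_ai_summary(log_path: str) -> Counter:
    """Tally requests to watched AI domains, grouped by user."""
    hits = Counter()
    with open(log_path, newline="") as f:
        # Assumed columns: "user" and "destination_host";
        # adjust to your proxy's actual schema.
        for row in csv.DictReader(f):
            if row.get("destination_host", "").lower() in AI_DOMAINS:
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    for user, count in shadow_ai_summary("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to AI tools")
```

The point of a report like this isn’t to punish whoever shows up in it; it’s to know where the AI conversations need to start.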
Security Becomes the Enemy
There’s this perception that security teams are “the mean dudes in the basement.” People think: “Oh, security’s probably going to say no anyway, so never mind those guys. I’m going to do it anyway.”
We’ve created that perception ourselves. And with AI, people are primed to assume we’ll shut them down, so they don’t bother asking. That kills the communication you actually want.
I had a developer come up to me after a conference presentation. He uses APIs every single day and wanted his security team’s input on testing and validation. But he’d decided against asking because he figured they’d just say no.
The Trust Deficit Costs You Everything
Here’s the conversation you want: someone from marketing comes up to you and says, “Hey, I want to use this AI tool. What’s our stance on it? How do I use it safely?” That’s the right way to partner with security.
We’re going to vet it. We understand AI is going to be part of the business. We’re not going to say no to it—we want people to use it safely. But we need that conversation first.
When people assume security will block everything, they stop asking. They sign up with personal emails, start feeding company data into unvetted platforms, and you lose all visibility and control. The result is predictable: 38% of employees share sensitive work information with AI tools without their employer’s permission.
Organizations are going to use AI no matter what. The question is: how do we make sure our employees can leverage it safely without jeopardizing company data, privacy, or security?
Start With Smart Yeses, Not Blanket Nos
Here’s what actually works: proactive communication. Send newsletters or run 30-minute webinars saying, “We’re allowing AI within the organization. Here’s how to do it safely.” Record the sessions for people who miss them.
Show examples of employees who partnered with security successfully. Make those partnerships visible, so security isn’t a shadowy entity that people assume will shut them down.
You need to build trust over time between security and employees. At the end of the day, we’re all using AI in big and small ways. I used it to plan my Germany trip—told it to create a three-day itinerary and got a rocking trip out of it.
From an organizational standpoint, we need to recognize where AI sits in the business. Is it going to increase revenue? Probably. The business case is clear: AI has moved from ‘experiment’ to ‘essential,’ with enterprise spending jumping 130%. So, what do we do? How do we provide protection while enabling productivity?
The goal isn’t perfect control; that’s impossible. The goal is informed AI adoption with guardrails that actually work in practice. The new risk for security is being shut out of AI usage—usage that’s happening with or without you.
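What might a guardrail look like in practice? Here’s a minimal sketch, again my own illustration and not a prescribed control: masking obvious sensitive patterns before a prompt leaves for an external AI tool. The regex patterns and placeholder tokens are assumptions, and real data-loss prevention goes far beyond two regexes.

```python
# Minimal sketch (illustrative assumptions): mask obvious sensitive patterns
# before text is sent to an external AI tool.
import re

# Hypothetical patterns: email addresses and long digit runs
# (account or card numbers).
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b\d{9,16}\b"), "[NUMBER]"),
]

def scrub(text: str) -> str:
    """Replace each watched pattern with its placeholder token."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    prompt = "Summarize pricing for jane.doe@example.com, account 123456789."
    print(scrub(prompt))  # Summarize pricing for [EMAIL], account [NUMBER].
```

A check like this belongs inside a vetted path that security provides, the “smart yes” this section is about, not as a reason to block tools outright.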
For organizations serious about getting this right, experts recommend developing clear AI governance policies that balance business needs with security requirements before shadow AI usage spirals completely out of control.