
When AI Misuse Triggers a Corporate Crisis

Most companies lack policies to prevent reputational disasters from artificial intelligence tools

Recent surveys reveal only about 30 percent of companies have established AI policies. Meanwhile, 77 percent of employees share company secrets on ChatGPT, according to the 2025 Enterprise AI and SaaS Data Security Report by LayerX. This combination creates perfect conditions for reputational crises.

Most AI policies that do exist tend to focus on technical and compliance risks. They address data security protocols, vendor assessments, and regulatory requirements. What these policies often miss is the public relations disaster that can unfold within hours of AI misuse. At Red Banyan, we counsel a growing number of organizations through AI-related crises, and we have begun to see a common pattern. The technical breach is usually contained quickly, but the damage to reputation, client relationships, and stakeholder trust can persist for months or even years.

The Shadow AI Problem

The biggest threat comes from what security professionals call “Shadow AI.” This is where employees use unapproved, personal AI accounts for work tasks, bypassing corporate security controls. Most of the time, they do so without fully realizing the risks.

According to Cyberhaven research, 11 percent of data employees put into ChatGPT is confidential. That figure should alarm every CIO and communications leader. We’re talking about source code, client contracts, unreleased product roadmaps, financial projections, and employee records flowing into systems that organizations don’t control.

How Data Exposure Happens

Software engineers can turn to AI tools like Claude or ChatGPT to debug or optimize pieces of code. In 2023, Samsung software engineers did just that – they uploaded internal source code to ChatGPT while attempting to debug issues. The leak of confidential code forced Samsung to implement comprehensive restrictions on AI usage across all engineering departments.

Marketing professionals and HR staff often use AI to polish their writing. They upload draft proposals, internal policy documents, and sometimes even client agreements, asking the AI to improve clarity or fix grammatical errors. These documents frequently contain financial projections, legal terms, or strategic plans that competitors would find valuable.

Customer service teams experimenting with AI efficiency tools sometimes input actual customer conversations and support tickets. They want the AI to summarize interactions or suggest better responses. When these inputs include customer names, contact information, account details, or purchase history, the company has potentially violated privacy regulations like GDPR or CCPA.

Product development teams brainstorming new features may describe unannounced capabilities to ChatGPT, hoping the AI will help refine their ideas or identify potential problems. These descriptions can reveal competitive advantages, technological innovations, or market strategies the company intended to keep confidential until launch.

All of this information could be retained by large language models and inform AI responses to other users in the future.

For example, after the product development team has described their upcoming product to ChatGPT, a competitor or journalist could ask the AI tool about that company’s upcoming releases. ChatGPT might reference the confidential information that was shared and spill the beans on a new product or technology the company spent considerable resources developing.

How This Becomes a PR Crisis

When these incidents surface, they rarely stay contained as IT issues. Here’s what typically happens:

The breach gets discovered, often by accident or through a third-party alert. IT begins investigating while trying to assess the scope. Meanwhile, if the incident involves client data or regulated information, legal obligations require disclosure. Once disclosed, media coverage begins. Social media amplifies the story. Clients start calling with concerns. Employees worry about job security and their own liability.

Within 24 to 48 hours, what started as a technical incident has become a full reputation crisis. The company needs to explain to multiple audiences how this happened, why controls failed, and what’s being done to prevent recurrence. If the company hasn’t prepared for this scenario, the response is often slow, inconsistent, or defensive. Each misstep extends the crisis and deepens the damage.

Building a Crisis Response Framework for AI Incidents

CIOs need to partner with communications and legal teams to build crisis response protocols specifically for AI incidents. Technical controls and policies matter, but they’re insufficient without a plan for managing the media fallout when something goes wrong.

Here are six actionable steps to start this process:

  1. Establish clear escalation paths. When an AI-related data exposure is discovered, who gets notified immediately? IT, Legal, Communications, and the C-suite should all be looped in quickly. Create a decision tree that determines when to activate crisis protocols based on the type and sensitivity of exposed data.
  2. Develop response templates. Pre-draft holding statements and Q&A documents for common AI mishap scenarios. These should address employee misuse, vendor security issues, and accidental data exposure. Having templates ready allows for faster, more consistent responses when time is critical.
  3. Train spokespeople. Executives and communications staff need media training specifically focused on how to discuss AI incidents. The technology is complex and full of jargon, which can be difficult to navigate when answering questions from journalists and stakeholders.
  4. Monitor for early warning signs. Social media monitoring should include keywords related to your organization and AI tools. Sometimes the first indication of a problem comes from an employee posting on LinkedIn or a customer complaining on Twitter about an AI-generated response.
  5. Conduct crisis simulations. Tabletop exercises that walk through an AI data exposure scenario help teams understand their roles and identify gaps in the response plan. These simulations should involve IT, Legal, Communications, HR, and executive leadership.
  6. Build relationships before you need them. Establish connections with crisis communications firms, cybersecurity experts who can provide third-party validation, and legal counsel experienced in AI-related issues. When a crisis hits, you want trusted advisors who can mobilize immediately.
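To make step 1 concrete, the escalation decision tree can be as simple as a lookup from data sensitivity to the teams that get notified. The sketch below is illustrative only; the data categories, severity levels, and team names are hypothetical examples, not a standard, and each organization would define its own.

```python
# Hypothetical sketch of an AI-incident escalation decision tree (step 1).
# Categories, severity levels, and team names are illustrative assumptions.

SEVERITY = {
    "public": 0,          # already-public material
    "internal": 1,        # internal drafts and documents
    "confidential": 2,    # source code, roadmaps, financial projections
    "regulated": 3,       # customer PII or records covered by GDPR/CCPA
}

ESCALATION = {
    0: ["IT"],
    1: ["IT", "Communications"],
    2: ["IT", "Legal", "Communications"],
    3: ["IT", "Legal", "Communications", "C-suite"],
}

def escalate(data_category: str, client_data_involved: bool) -> list[str]:
    """Return the teams to notify immediately for a given exposure.

    Client data always raises the incident to the highest level,
    since disclosure obligations and reputational stakes apply.
    """
    level = SEVERITY[data_category]
    if client_data_involved:
        level = max(level, 3)
    return ESCALATION[level]

print(escalate("internal", client_data_involved=False))
print(escalate("confidential", client_data_involved=True))
```

Writing the tree down this way, even on paper rather than in code, forces the organization to decide in advance which exposures trigger crisis protocols instead of debating it mid-incident.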

The Path Forward

The gap between AI adoption and AI governance continues to grow. Employees have easy access to powerful tools that can create significant reputational risks. CIOs have traditionally focused on the technology side of AI risk management. However, the reputational dimension of AI incidents requires equal attention and preparation.

The question isn’t whether your organization will face an AI-related crisis. Given current adoption rates and the prevalence of Shadow AI, the question is when. Companies that prepare now, with policies that address both technical and reputational risks, will weather these incidents far better than those caught unprepared.

Vlad Drazdovich is Vice President of Performance Improvement and Analytics at Red Banyan, a strategic communications and crisis management agency. As part of his role, Vlad advises on corporate AI strategy and oversees AI-based technology implementations at the firm.