
The AI Crisis Comms Arms Race: When Your Chatbot Goes Rogue, What’s the New Playbook?


Across today's internet, brands are racing to integrate AI-powered chatbots to streamline customer interactions and operations. But with greater capability comes unprecedented risk. When an AI chatbot fails, publishing off-color, deceptive, or libelous material, the damage can be done swiftly and at scale. The question is: how should brands react when their AI goes wrong?

The New Frontier of Brand Risk

Recent events hint at this potential threat. In February 2024, Air Canada faced legal repercussions when its AI-powered chatbot provided a customer with incorrect information regarding the airline’s bereavement fare policy. The chatbot erroneously informed the passenger that he could apply for a bereavement discount retroactively, which contradicted the airline’s actual policy. When the customer sought the discount post-travel, Air Canada denied the request, leading to a dispute. The British Columbia Civil Resolution Tribunal ruled in favor of the passenger, ordering Air Canada to compensate him and uphold the discount. This case underscores the potential liabilities companies face when AI systems disseminate inaccurate information, emphasizing the need for robust oversight and accountability mechanisms.

Likewise, Meta has faced criticism over its AI-based digital companions. An investigation by The Wall Street Journal found that some of its chatbots engaged in sexually explicit conversations with test accounts posing as minors. The revelation created serious reputational problems for the company, raising ethical questions and underscoring the need for AI guardrails.

OpenAI’s ChatGPT hasn’t been without controversy either. One recent update made the chatbot overly agreeable, even affirming harmful or delusional statements from users. This sycophancy, intended to increase engagement, raised ethical concerns about AI shaping user behavior through emotional validation. OpenAI acknowledged the issue and rolled back the update, but the incident highlights the fine line between user engagement and ethical AI behavior.

Accountability in the Age of AI

These events raise fundamental questions about accountability. When a human spokesperson makes a mistake, the path to redemption is straightforward: apologize, correct the record, and move on. When an AI system is responsible, things get murkier. Who is at fault: the developers, the organization deploying the AI, or the AI itself?

Brands must recognize that deploying AI does not relieve them of responsibility. Consumers treat the chatbot’s voice as the brand’s voice, so even a small mistake by the AI reflects on the brand’s reputation. Firms need strict guidelines for monitoring AI and must be prepared to act swiftly when things go wrong.

The transparency expectations surrounding AI are rapidly evolving. Consumers, regulators, and journalists are demanding clarity on how automated systems are trained, deployed, and governed. In this environment, silence or deflection is not an option. Brands need to proactively communicate their AI governance practices and be ready to provide a human response when an AI error occurs. Crisis protocols must now include specific AI-related contingencies, such as establishing clear lines of ownership for AI-generated outputs and ensuring that human oversight is always part of the process. Simply blaming “the algorithm” is not a strategy; it is an excuse that erodes trust.

Additionally, companies must recognize that AI-driven mistakes often spread faster than traditional ones, fueled by viral social media amplification and the public’s fascination with technology missteps. A single screenshot of a rogue chatbot interaction can reach millions within hours. This heightens the need for constant AI monitoring and rapid escalation paths. Communications teams should conduct scenario planning specific to AI failures, develop templated responses, and align legal, compliance, and engineering teams around a shared understanding of accountability. In this new landscape, reputational resilience depends not only on how a brand responds to AI crises, but also on how transparently it prepares for them in the first place.

Preventive Steps for AI Crisis Management

To navigate this increasingly complex AI landscape, brands can take the following steps:

  • Implement Robust Monitoring Systems: Routinely audit AI output to identify and correct objectionable content in a timely manner. For example, SeekOut, a talent intelligence platform, conducts regular audits of its AI systems to ensure fair and unbiased outcomes. In response to evolving regulations and as part of its commitment to responsible AI, SeekOut engaged third-party auditor Credo AI to evaluate its algorithms. The audit assessed the performance of AI features across various demographic groups, verifying that search results for job titles were representative and equitable. This proactive approach enables SeekOut to identify and rectify potential biases promptly, maintaining the integrity and fairness of its AI-driven services.
  • Create Clear Accountability Frameworks: Determine who in the organization will handle AI monitoring and crisis management. For example, the U.S. Government Accountability Office released an AI accountability framework emphasizing governance, data, performance, and monitoring. It provides practices for federal agencies to ensure responsible AI use, including setting clear goals and engaging diverse stakeholders.
  • Create AI-Specific Crisis Response Plans: Generic crisis management plans might not be adequate. Customize plans to respond to AI-specific crises, such as shutting down AI systems when needed. The United Nations Development Programme utilizes AI-driven Crisis Risk Dashboards to monitor and predict potential crises, such as hate speech and violence. These dashboards enable proactive responses by analyzing real-time data and forecasting risks.
  • Practice Honest Communication: When an AI error occurs, communicate honestly with stakeholders about what went wrong, the actions taken to fix it, and the procedures put in place to prevent a recurrence. For example, in 2018, Amazon discontinued its AI recruiting tool after discovering it was biased against female candidates. The company acknowledged the issue and ceased using the tool, demonstrating transparency in addressing AI shortcomings.
  • Invest in Ethical AI Training: Ensure AI models are trained on diverse and inclusive datasets to curb bias and offensive output. In this vein, researchers at the University of Washington and the Allen Institute for AI developed Delphi, an AI system designed to make ethical judgments. While it shows promise, Delphi sometimes reflects societal biases, highlighting the challenges of training AI with diverse and inclusive datasets.
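To make the monitoring and shutdown steps above concrete, here is a minimal sketch of an output-review guardrail. It is a hypothetical illustration, not any vendor's actual system: the blocklist, strike threshold, and fallback messages are all assumptions, and a real deployment would use a trained moderation model rather than keyword matching.

```python
from dataclasses import dataclass, field

# Hypothetical blocklist of risky phrases; a production system would
# rely on a moderation classifier, not simple keyword matching.
FLAGGED_TERMS = {"retroactive discount", "guaranteed refund", "legal advice"}

@dataclass
class ChatbotGuardrail:
    max_strikes: int = 3                      # flagged replies tolerated before shutdown
    strikes: int = 0
    online: bool = True
    escalations: list = field(default_factory=list)

    def review(self, reply: str) -> str:
        """Screen a chatbot reply before delivery; return it or a safe fallback."""
        if not self.online:
            return "This assistant is temporarily offline. Please contact support."
        if any(term in reply.lower() for term in FLAGGED_TERMS):
            self.strikes += 1
            self.escalations.append(reply)    # route to a human reviewer
            if self.strikes >= self.max_strikes:
                self.online = False           # kill switch: take the bot down
            return "Let me connect you with a human agent for that question."
        return reply

guard = ChatbotGuardrail()
print(guard.review("Our support hours are 9am to 5pm."))
print(guard.review("You qualify for a retroactive discount!"))
```

The design choice worth noting is the escalation queue: every flagged reply is preserved for human review, which supports the clear lines of ownership and honest post-incident communication discussed above.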

As artificial intelligence becomes more deeply integrated into brand communications, the risk of missteps grows. While AI offers valuable efficiencies, it also introduces unique challenges that brands must be prepared to manage. By proactively implementing control measures and tailored crisis response plans, organizations can safeguard their reputation and maintain consumer trust in this evolving digital landscape.

Ronn Torossian is the Founder & Chairman of 5W Public Relations, one of the largest independently-owned PR firms in the United States. Since founding 5WPR in 2003, he has led the company's growth and vision, with the agency earning accolades including being named a Top 50 Global PR Agency by PRovoke Media, a top three NYC PR agency by O'Dwyers, one of Inc. Magazine's Best Workplaces and being awarded multiple American Business Awards, including a Stevie Award for PR Agency of the Year.