
Concerns Over Potential Risks of ChatGPT Are Gaining Momentum but Is a Pause on AI a Good Move?


While Elon Musk and other global tech leaders have called for a pause in AI following the release of ChatGPT, some critics believe a halt in development is not the answer. AI evangelist Andrew Pery of intelligent automation company ABBYY believes that taking a break now is like trying to put the toothpaste back in the tube. Here, he tells us why…

AI applications are pervasive, impacting virtually every facet of our lives. While the intent behind a pause is laudable, putting the brakes on now may be implausible.

There are certainly palpable concerns driving calls for increased regulatory oversight to rein in AI's potential harmful impacts.

Just recently, the Italian Data Protection Authority temporarily blocked the use of ChatGPT nationwide over privacy concerns about how personal data used to train the model is collected and processed, as well as an apparent lack of safeguards for children, who risk being exposed to responses “absolutely inappropriate to their age and awareness.”

The European Consumer Organisation (BEUC) is urging the EU to investigate the potential harmful impacts of large-scale language models, given that “concerns are growing about how ChatGPT and similar chatbots might deceive and manipulate people. These AI systems need greater public scrutiny, and public authorities must reassert control over them.”

In the US, the Center for AI and Digital Policy has filed a complaint with the Federal Trade Commission alleging that ChatGPT violates Section 5 of the Federal Trade Commission Act (FTC Act) (15 USC 45). The basis of the complaint is that ChatGPT allegedly fails to meet the FTC's guidance on transparency and explainability of AI systems. The complaint also cites ChatGPT's own acknowledgements of several known risks, including compromising privacy rights, generating harmful content, and propagating disinformation.

Notwithstanding the utility of large-scale language models such as ChatGPT, research points to their potential dark side. ChatGPT is proven to produce incorrect answers, as the underlying model is based on deep learning algorithms trained on large data sets drawn from the internet. Unlike other chatbots, ChatGPT uses language models based on deep learning techniques that generate text resembling human conversation, and the platform “arrives at an answer by making a series of guesses, which is part of the reason it can argue wrong answers as if they were completely true.”

Furthermore, ChatGPT has been shown to accentuate and amplify bias, producing “answers that discriminate against gender, race, and minority groups, something which the company is trying to mitigate.” ChatGPT may also be a bonanza for nefarious actors seeking to exploit unsuspecting users, compromising their privacy and exposing them to scam attacks.

These concerns prompted the European Parliament to publish a commentary reinforcing the need to further strengthen the current provisions of the draft EU Artificial Intelligence Act (AIA), which is still pending ratification. The commentary points out that the current draft of the proposed regulation focuses on what are referred to as narrow AI applications: specific categories of high-risk AI systems such as recruitment, creditworthiness, employment, law enforcement, and eligibility for social services. However, the draft AIA does not cover general-purpose AI, such as large language models that provide more advanced cognitive capabilities and can “perform a wide range of intelligent tasks.” There are calls to extend the scope of the draft regulation to include a separate, high-risk category of general-purpose AI systems, requiring developers to undertake rigorous ex ante conformance testing before placing such systems on the market and to continuously monitor their performance for potentially unexpected harmful outputs.

A particularly helpful piece of research draws attention to this gap, noting that the EU AIA regulation is “primarily focused on conventional AI models, and not on the new generation whose birth we are witnessing today.”

It recommends four strategies that regulators should consider:

  1. Require developers of such systems to report regularly on the efficacy of their risk management processes in mitigating harmful outputs.
  2. Obligate businesses deploying large-scale language models to disclose to their customers that the content was AI-generated.
  3. Require developers to subscribe to a formal process of staged releases, as part of a risk management framework, designed to safeguard against potentially unforeseen harmful outcomes.
  4. Place the onus on developers to “mitigate the risk at its roots” by having to “pro-actively audit the training data set for misrepresentations.”

A factor that perpetuates the risks associated with disruptive technologies is the drive by innovators to achieve first-mover advantage by adopting a “ship first and fix later” business model. While OpenAI is somewhat transparent about the potential risks of ChatGPT, it has released the model for broad commercial use with a “buyer beware” onus on users to weigh and assume the risks themselves. That may be an untenable approach given the pervasive impact of conversational AI systems. Proactive regulation coupled with robust enforcement measures must be paramount when handling such a disruptive technology.

Artificial intelligence already permeates nearly every part of our lives, meaning a pause on AI development could bring a multitude of unforeseen obstacles and consequences. Instead of suddenly pumping the brakes, industry and legislative players should collaborate in good faith to enact actionable regulation rooted in human-centric values like transparency, accountability, and fairness. By building on existing legislation such as the AIA, leaders in the private and public sectors can design thorough, globally standardized policies that prevent nefarious uses and mitigate adverse outcomes, keeping artificial intelligence within the bounds of improving human experiences.

Andrew Pery is an AI Ethics Evangelist at intelligent automation company ABBYY. He has more than 25 years of experience spearheading product management programs for leading global technology companies. His expertise is in intelligent document process automation and process intelligence, with a particular focus on AI technologies, application software, data privacy, and AI ethics. He holds a Master of Laws degree with Distinction from Northwestern University Pritzker School of Law and is a Certified Data Privacy Professional.