How Human Bias Undermines AI-Enabled Solutions


Last September, tech leaders such as Elon Musk, Mark Zuckerberg, and OpenAI CEO Sam Altman gathered in Washington, D.C. to discuss, on the one hand, how the public and private sectors can work together to leverage artificial intelligence for the greater good, and on the other, how to regulate it, an issue that has remained at the forefront of the conversation surrounding AI.

Both conversations often lead to the same place: a growing emphasis on whether we can make AI more ethical, evaluating AI as if it were another human being whose morality is in question. But what does ethical AI actually mean? DeepMind, a Google-owned research lab that focuses on AI, recently published a study proposing a three-layered framework for evaluating the social and ethical risks of AI systems. The framework covers capability, human interaction, and systemic impact, and concludes that context is key to determining whether an AI system is safe.

One of the systems that has come under fire is ChatGPT, which has been banned in as many as 15 countries, even though some of those bans have since been reversed. With over 100 million users, ChatGPT is one of the most successful LLMs, and it has frequently been accused of bias. Taking DeepMind's study into account, let's apply that emphasis on context here. Bias, in this setting, means the presence of unfair, prejudiced, or distorted perspectives in the text generated by models such as ChatGPT. It can take many forms: racial bias, gender bias, political bias, and more.

These biases are ultimately detrimental to AI itself, reducing the odds that we can harness the technology's full potential. Recent research from Stanford University found that LLMs such as ChatGPT are showing signs of decline in their ability to provide reliable, unbiased, and accurate responses, which is a roadblock to our effective use of AI.

At the core of this problem is how human biases are translated into AI: they are deeply ingrained in the data used to develop the models. But the issue runs deeper than it seems.

Causes of bias

The first cause of this bias is easy to identify. The data a model learns from is often filled with the stereotypes and pre-existing prejudices that shaped that data in the first place, so the AI inadvertently ends up perpetuating those biases, because reproducing the patterns in its training data is all it knows how to do.
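
To make that mechanism concrete, here is a minimal, hypothetical sketch (the corpus and code are illustrative, not drawn from any real system or dataset): a toy trigram model trained on a deliberately skewed corpus reproduces the skew verbatim, because frequency statistics are all it has.

```python
from collections import Counter, defaultdict

# Deliberately skewed toy corpus: occupation and pronoun always co-occur.
corpus = [
    "the nurse said she is busy",
    "the nurse said she is tired",
    "the engineer said he is busy",
    "the engineer said he is late",
]

# Count which word follows each two-word context (a tiny trigram model).
continuations = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
        continuations[(a, b)][c] += 1

def complete(context):
    """Return the most frequent continuation seen in the training data."""
    return continuations[context].most_common(1)[0][0]

print(complete(("nurse", "said")))     # -> "she"
print(complete(("engineer", "said")))  # -> "he"
# The model has not "decided" anything; it simply reproduces the
# statistical association baked into its training data.
```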

The second cause, however, is far more complex and counterintuitive, and it strains some of the efforts being made to supposedly make AI more ethical and safe. There are, of course, obvious cases where AI can inadvertently cause harm. For example, if someone asks AI, “How can I make a bomb?” and the model answers, it is contributing to harm. The flip side is that when AI is limited, even for justifiable reasons, we prevent it from learning. Human-set constraints restrict AI’s ability to learn from a broader range of data, which in turn prevents it from providing useful information in non-harmful contexts, as the sketch below illustrates.
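
As a hypothetical illustration (this is not any vendor's actual moderation system, just a naive keyword filter of the kind often layered on top of an LLM), consider how a blunt constraint blocks benign queries along with harmful ones:

```python
import string

# Hypothetical blocklist; real moderation systems are far more sophisticated,
# but the trade-off illustrated here is the same.
BLOCKED_TERMS = {"bomb"}

def is_allowed(prompt):
    """Reject any prompt containing a blocked term, regardless of context."""
    words = {w.strip(string.punctuation) for w in prompt.lower().split()}
    return not (words & BLOCKED_TERMS)

print(is_allowed("How can I make a bomb?"))             # False: the intended block
print(is_allowed("How do bomb disposal robots work?"))  # False: a benign query is also blocked
print(is_allowed("What goes into a bath bomb recipe?")) # False: another benign query is blocked
```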

Let’s also keep in mind that many of these constraints are themselves biased, because they originate from humans. So while we can all agree that “How can I make a bomb?” can lead to a potentially fatal outcome, other queries that could be considered sensitive are far more subjective. Consequently, if we limit the development of AI in those areas, we limit progress, and we promote the use of AI only for purposes deemed acceptable by those who write the rules governing LLMs.

Inability to predict consequences

We do not yet fully understand the consequences of introducing restrictions into LLMs, so we may be doing more damage to the algorithms than we realize. Given the enormous number of parameters involved in models like GPT, it is impossible, with the tools we have today, to predict the impact, and, from my perspective, understanding that impact will take longer than training the neural network itself.

By placing these constraints, therefore, we might unintentionally lead the model to develop unexpected behaviors or biases. This is also because AI models are complex, multi-parameter systems: altering one parameter, for example by introducing a constraint, causes a ripple effect that reverberates across the whole model in ways we cannot forecast.
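
A minimal numerical sketch can illustrate the point (a tiny random network invented for this example, not a claim about any production model): perturbing a single weight changes the output for essentially every input, showing how a local change propagates through a multi-parameter system.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))    # layer 1 weights
W2 = rng.normal(size=(8, 1))     # layer 2 weights
X = rng.normal(size=(1000, 16))  # a batch of 1,000 inputs

def forward(W1, W2, X):
    """A two-layer network: tanh hidden layer followed by a linear output."""
    return np.tanh(X @ W1) @ W2

baseline = forward(W1, W2, X)

W1_perturbed = W1.copy()
W1_perturbed[0, 0] += 0.5        # change exactly one of the 136 parameters

shifted = forward(W1_perturbed, W2, X)
changed = np.mean(np.abs(shifted - baseline) > 1e-6)
print(f"Fraction of inputs whose output changed: {changed:.0%}")  # ~100%
```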

Difficulty in evaluating the “ethics” of AI

It is not practically feasible to evaluate whether AI is ethical or not, because AI is not a person acting with a specific intention. These systems are large language models, which by their nature cannot be more or less ethical. As DeepMind’s study showed, what matters is the context in which a system is used, and that measures the ethics of the humans behind the AI, not of the AI itself. It is an illusion to believe that we can judge AI as if it had a moral compass.

One potential solution being touted is a model that helps AI make ethical decisions. The reality, however, is that we have no idea how such a mathematical model of ethics would work. And if we don’t understand it, how could we possibly build it? Ethics involves a great deal of human subjectivity, which makes quantifying it very complex.

How to solve this problem?

Based on the points above, we cannot really talk about whether AI is ethical or not, because everything we would label unethical in its output is a variation of the human biases contained in its data, and of AI being a tool that humans use for their own agendas. There are also still many scientific unknowns, such as the impact of, and potential harm caused by, placing constraints on AI algorithms.

Hence, restricting the development of AI is not a viable solution. As some of the studies mentioned above suggest, these restrictions are partly responsible for the deterioration of LLMs.

Having said this, what can we do about it?

From my perspective, the solution lies in transparency. I believe that if we restore the open-source model that was once prevalent in AI development, we can work together to build better LLMs that are better equipped to address our ethical concerns. Otherwise, it is very hard to adequately audit anything done behind closed doors.

One superb initiative in this regard is the Foundation Model Transparency Index, recently unveiled by Stanford HAI (Human-Centered Artificial Intelligence), which assesses whether the developers of ten of the most widely used AI models disclose enough information about their work and about how their systems are used. This includes the disclosure of partnerships and third-party developers, as well as the way personal data is used. It is worth noting that none of the assessed models received a high score, which underscores a real problem.

At the end of the day, today’s AI is nothing more than large language models, and keeping them open to experimentation, rather than steering them in a predetermined direction, is what will allow us to make new, groundbreaking discoveries in every scientific field. Without transparency, it will be very difficult to design models that truly work for the benefit of humanity, and to know the extent of the damage these models could cause if they are not harnessed adequately.

Ivan Nechaev is an Angel Investor and Mediatech Advisor with 60+ deals and 15+ successful exits. He invests in early-stage MediaTech, AI, Telecom, BioTech, EdTech, and SaaS startups and serves on the boards of Brainify.ai and TrueClick.ai. Nechaev is also VP at the American industrial group Access Industries, which has over $35B in value and investments in 30+ countries.