

Computer Scientists Tackle Bias in AI


Computer scientists from Princeton and Stanford universities are addressing problems of bias in artificial intelligence (AI), working on methods that result in fairer data sets containing images of people. The researchers work closely with ImageNet, a database of more than 14 million images that has helped advance computer vision over the past decade. Using their methods, the researchers then recommended improvements for the database.

ImageNet includes images of objects, landscapes, and people. Researchers who build machine learning algorithms to classify images use ImageNet as a source of data. Because of the database's massive size, automated image collection and crowdsourced image annotation were necessary. Now, the ImageNet team is working to correct biases and other concerns; many of the images containing people are unintended consequences of ImageNet's construction.

Olga Russakovsky is a co-author of the paper and an assistant professor of computer science at Princeton.

“Computer vision now works really well, which means it's being deployed all over the place in all kinds of contexts,” she said. “This means that now is the time for talking about what kind of impact it's having on the world and thinking about these kinds of fairness issues.”

In the new paper, the ImageNet team systematically identified non-visual concepts and offensive categories, such as racial and sexual characterizations, and proposed removing them from the database. The team also developed a tool that allows users to specify and retrieve image sets of people by age, gender expression, and skin color. The goal is to enable algorithms that classify people's faces and activities in images more fairly.

The work done by the researchers was presented on Jan. 30 at the Association for Computing Machinery’s Conference on Fairness, Accountability, and Transparency in Barcelona, Spain. 

“There is very much a need for researchers and labs with core technical expertise in this to engage in these kinds of conversations,” said Russakovsky. “Given the reality that we need to collect the data at scale, given the reality that it's going to be done with crowdsourcing because that's the most efficient and well-established pipeline, how do we do that in a way that's fairer — that doesn't fall into these kinds of prior pitfalls? The core message of this paper is around constructive solutions.”

ImageNet was launched in 2009 by a group of computer scientists at Princeton and Stanford as a resource for academic researchers and educators. The creation of the database was led by Princeton alumna and faculty member Fei-Fei Li.

ImageNet was able to become such a large database of labeled images through the use of crowdsourcing. One of the main platforms used was Amazon Mechanical Turk (MTurk), where workers were paid to verify candidate images. This approach introduced problems, including many biases and inappropriate categorizations.

Lead author Kaiyu Yang is a graduate student in computer science. 

“When you ask people to verify images by selecting the correct ones from a large set of candidates, people feel pressured to select some images and those images tend to be the ones with distinctive or stereotypical features,” he said. 

The first part of the study involved filtering out potentially offensive or sensitive person categories from ImageNet. Offensive categories were defined as those containing profanity or racial or gender slurs; sensitive categories included classifications of people by traits such as sexual orientation or religion. Twelve graduate students from diverse backgrounds annotated the categories and were instructed to label a category as sensitive if they were unsure. About 54% of the person categories were eliminated: 1,593 out of the 2,932 in ImageNet.
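To make the filtering step concrete, here is a minimal sketch of how such multi-annotator labels might be aggregated; the synset IDs, votes, and one-flag removal policy are hypothetical, not the authors' actual pipeline.

```python
from collections import Counter

# Hypothetical data: each person category (synset ID) maps to the labels
# assigned by the 12 annotators ("safe", "sensitive", or "offensive").
annotations = {
    "n00000001": ["safe"] * 12,
    "n00000002": ["offensive"] * 5 + ["sensitive"] * 4 + ["safe"] * 3,
}

def is_unsafe(labels, min_flags=1):
    # Assumed conservative policy: a single "offensive" or "sensitive" vote
    # (annotators also marked "sensitive" when unsure) removes the category.
    counts = Counter(labels)
    return counts["offensive"] + counts["sensitive"] >= min_flags

safe_categories = [cat for cat, labels in annotations.items()
                   if not is_unsafe(labels)]
print(safe_categories)  # ['n00000001']
```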

MTurk workers then rated the “imageability” of the remaining categories on a scale of 1 to 5. A total of 158 categories, rated 4 or higher, were classified as both safe and imageable. This filtered set of categories includes more than 133,000 images, which can be highly useful for training computer vision algorithms.
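For illustration, a minimal sketch of that imageability cutoff follows, assuming a category's score is the mean of its per-worker ratings; the category names and ratings below are invented.

```python
# Hypothetical per-worker imageability ratings (scale of 1 to 5).
imageability_ratings = {
    "basketball_player": [5, 5, 4, 4, 5],
    "philanthropist": [2, 1, 3, 2, 2],
}

CUTOFF = 4  # categories rated 4 or higher were kept, per the study

def mean(scores):
    return sum(scores) / len(scores)

imageable = {cat: mean(scores)
             for cat, scores in imageability_ratings.items()
             if mean(scores) >= CUTOFF}
print(imageable)  # {'basketball_player': 4.6}
```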

The researchers then studied the demographic representation of people in these images to assess the level of bias in ImageNet. Content sourced from search engines often overrepresents males, light-skinned people, and adults between the ages of 18 and 40.

“People have found that the distributions of demographics in image search results are highly biased, and this is why the distribution in ImageNet is also biased,” said Yang. “In this paper we tried to understand how biased it is, and also to propose a method to balance the distribution.”

The researchers considered three attributes protected under U.S. anti-discrimination laws: skin color, gender expression, and age. MTurk workers then annotated each attribute for each person in an image.

The results showed considerable bias in ImageNet's content. The most underrepresented groups were dark-skinned people, females, and adults over the age of 40.
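As a rough illustration of how such skew can be quantified from the crowd annotations, the snippet below tallies the distribution of one attribute; the labels are invented, not the paper's measurements.

```python
from collections import Counter

# Hypothetical gender-expression annotations for people in a set of images.
labels = ["male", "male", "male", "male", "female", "male", "female"]

dist = Counter(labels)
total = sum(dist.values())
for value, count in sorted(dist.items()):
    print(f"{value}: {count}/{total} ({count / total:.0%})")
```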

The researchers also designed a web-interface tool that allows users to obtain a set of images that is demographically balanced in a way the user chooses.

“We do not want to say what is the correct way to balance the demographics, because it's not a very straightforward issue,” said Yang. “The distribution could be different in different parts of the world — the distribution of skin colors in the U.S. is different than in countries in Asia, for example. So we leave that question to our user, and we just provide a tool to retrieve a balanced subset of the images.”
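A minimal sketch of one way such a retrieval tool could balance a user-chosen attribute, by downsampling every group to the size of the smallest one, follows; the attribute names and the sampling strategy are assumptions rather than the tool's actual implementation.

```python
import random
from collections import defaultdict

def balanced_subset(images, attribute, seed=0):
    """Downsample each group sharing a value of `attribute` to the size
    of the smallest group, so every value is equally represented."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for img in images:
        groups[img[attribute]].append(img)
    smallest = min(len(group) for group in groups.values())
    subset = []
    for group in groups.values():
        subset.extend(rng.sample(group, smallest))
    return subset

# Hypothetical annotated images.
images = [
    {"file": "img1.jpg", "gender_expression": "male"},
    {"file": "img2.jpg", "gender_expression": "male"},
    {"file": "img3.jpg", "gender_expression": "female"},
]
print(balanced_subset(images, "gender_expression"))
```

Leaving the target distribution as a user parameter, as Yang describes, would simply replace the equal-counts rule above with whatever per-group quotas the user specifies.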

The ImageNet team is now working on technical updates to its hardware and database, and is implementing the person-category filtering and the rebalancing tool developed in this research. ImageNet is set to be re-released with the updates, along with a call for feedback from the computer vision research community.

The paper was also co-authored by Princeton Ph.D. student Klint Qinami and Assistant Professor of Computer Science Jia Deng. The research was supported by the National Science Foundation.  

 

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.