AI Researchers Propose Putting Bounties on AI Bias to Make AI More Ethical

A team of AI researchers from companies and AI development labs like Intel, Google Brain, and OpenAI has recommended the use of bounties to help ensure the ethical use of AI. The team recently released a set of proposals regarding ethical AI development, including a suggestion that rewarding people for discovering biases in AI could be an effective way of making AI fairer.

As VentureBeat reports, researchers from a variety of companies across the US and Europe collaborated on a set of ethical guidelines for AI development, along with suggestions for how to meet those guidelines. One of those suggestions was offering bounties to developers who find bias within AI programs. The suggestion was made in a paper entitled “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims”.

The biases the researchers hope to address are widespread: biased data and algorithms have been found in everything from healthcare applications to facial recognition systems used by law enforcement. One example is the PATTERN risk assessment tool, recently used by the US Department of Justice to triage prisoners and decide which ones could be sent home when reducing prison population sizes in response to the coronavirus pandemic.

The practice of rewarding developers for finding undesirable behavior in computer programs is an old one, but this may be the first time that an AI ethics group has seriously advanced the idea as a way of combating AI bias. While there are unlikely to be enough bounty hunters to catch every bias and guarantee that AI systems are ethical, the practice would still help companies reduce overall bias and get a sense of what kinds of bias are leaking into their AI systems.

The authors of the paper explained that the bug-bounty concept can be extended to AI with the use of bias and safety bounties and that proper use of this technique could lead to better-documented datasets and models. The documentation would better reflect the limitations of both the model and data. The researchers even note that the same idea could be applied to other AI properties like interpretability, security, and privacy protection.
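
The paper does not prescribe a reporting format for such bounties, but a bias bounty submission would likely amount to a reproducible measurement of unequal model behavior. The sketch below is a minimal illustration, assuming a binary classifier and a single protected attribute; the data and the function name are hypothetical, not taken from the paper.

```python
import numpy as np

def demographic_parity_gap(predictions, group_labels):
    """Largest difference in positive-prediction rates between groups."""
    predictions = np.asarray(predictions)
    group_labels = np.asarray(group_labels)
    # Positive-prediction rate for each group present in the data
    rates = [predictions[group_labels == g].mean() for g in np.unique(group_labels)]
    return max(rates) - min(rates)

# A bounty-style finding: on this (toy) evaluation set, group "A" receives
# positive predictions three times as often as group "B".
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")  # 0.50
```

A report built around a metric like this, together with the data needed to reproduce it, would also feed directly into the better-documented datasets and models the authors describe.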

As more and more discussion occurs around the ethical principles of AI, many have noted that principles alone are not enough and that actions must be taken to keep AI ethical. The authors of the paper note that “existing regulations and norms in industry and academia are insufficient to ensure responsible AI development.” Google Brain co-founder and AI industry leader Andrew Ng has likewise argued that guiding principles alone cannot ensure that AI is used responsibly and fairly, saying many of them need to be more explicit and paired with actionable ideas.

The bias bounty recommendation is the combined research team's attempt to move beyond ethical principles and into ethical action, and it is only one of several such recommendations.

Among the other recommendations companies can follow to make their AI usage more ethical, the researchers suggest creating a centralized database of AI incidents and sharing it among the wider AI community. They also propose establishing audit trails that preserve information about the creation and deployment of safety-critical AI applications.
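
The paper leaves the design of such audit trails open, but one common way to make a log tamper-evident is to chain each record to a hash of the previous one. The following is a minimal sketch of that idea; the field names and events are hypothetical rather than drawn from the researchers' proposal.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    """One entry in a hypothetical audit trail for an AI system."""
    event: str        # e.g. "dataset_collected", "model_trained", "model_deployed"
    details: dict     # free-form metadata describing the event
    timestamp: float
    prev_hash: str    # hash of the previous record, chaining the trail together

    def record_hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Each record points at the hash of the one before it, so altering an earlier
# entry invalidates everything that follows.
genesis = AuditRecord("dataset_collected", {"source": "internal-v1"}, time.time(), "0" * 64)
deploy = AuditRecord("model_deployed", {"version": "1.0"}, time.time(), genesis.record_hash())
print(deploy.prev_hash == genesis.record_hash())  # True while the trail is intact
```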

To preserve people's privacy, the researchers suggest employing privacy-centric techniques like encrypted communication, federated learning, and differential privacy (a brief sketch of the last of these appears below). They also suggest making open source alternatives widely available and subjecting commercial AI models to heavy scrutiny. Finally, they recommend increasing government funding so that academic researchers can verify hardware performance claims.
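
As a rough illustration, the sketch below applies the Laplace mechanism, a standard building block of differential privacy, to a simple counting query; the parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying epsilon-differential privacy."""
    rng = rng or np.random.default_rng()
    # More noise for higher sensitivity or a smaller (stricter) privacy budget
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query changes by at most 1 when one person's data changes,
# so its sensitivity is 1.
true_count = 1000
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"True count: {true_count}, privately released count: {noisy_count:.1f}")
```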