
Researchers Develop Algorithms Aimed At Preventing Bad Behaviour in AI


Along with all the advancements and advantages artificial intelligence has exhibited so far, there have also been reports of undesirable side effects such as racial and gender bias in AI systems. This has led sciencealert.com to ask: “how can scientists ensure that advanced thinking systems can be fair, or even safe?”

The answer may lie in a report by researchers at Stanford and the University of Massachusetts Amherst, titled “Preventing Undesirable Behavior of Intelligent Machines.” As eurekalert.org notes in its story about this report, AI is now starting to handle sensitive tasks, so “policymakers are insisting that computer scientists offer assurances that automated systems have been designed to minimize, if not completely avoid, unwanted outcomes such as excessive risk or racial and gender bias.”

The report this team of researchers presented “outlines a new technique that translates a fuzzy goal, such as avoiding gender bias, into the precise mathematical criteria that would allow a machine-learning algorithm to train an AI application to avoid that behavior.”

As Emma Brunskill, an assistant professor of computer science at Stanford and senior author of the paper, puts it: “we want to advance AI that respects the values of its human users and justifies the trust we place in autonomous systems.”

The idea was to define “unsafe” or “unfair” outcomes or behaviors in mathematical terms. This would, according to the researchers, make it possible “to create algorithms that can learn from data on how to avoid these unwanted results with high confidence.”

The second goal was to “develop a set of techniques that would make it easy for users to specify what sorts of unwanted behavior they want to constrain and enable machine learning designers to predict with confidence that a system trained using past data can be relied upon when it is applied in real-world circumstances.”
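To make the idea concrete, here is a minimal sketch of what such a high-confidence check could look like, assuming the unwanted behavior can be scored on each held-out data point as a quantity g whose true mean must stay at or below zero (for a gender-bias goal, g might be the gap between error rates for two groups minus a tolerance). The function names, the “No Solution Found” fallback, and the one-sided Student’s t bound are illustrative assumptions rather than the authors’ exact construction.

```python
import numpy as np
from scipy import stats

def passes_safety_test(g_samples, delta=0.05):
    """Return True only if a one-sided (1 - delta) upper confidence bound on
    the mean of g is at or below zero, i.e. the unwanted behavior is ruled
    out with high confidence on held-out data."""
    g = np.asarray(g_samples, dtype=float)
    n = g.size
    upper = g.mean() + g.std(ddof=1) / np.sqrt(n) * stats.t.ppf(1 - delta, n - 1)
    return upper <= 0.0

def constrained_result(candidate_model, g_samples, delta=0.05):
    """Release the candidate only when the safety test passes; otherwise
    report that no solution satisfying the constraint was found."""
    return candidate_model if passes_safety_test(g_samples, delta) else "No Solution Found"

# Example: behavior scores that are negative on average, i.e. within tolerance.
scores = np.random.default_rng(1).normal(-0.02, 0.05, 500)
print(constrained_result("candidate model", scores))
```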

ScienceAlert says the team named these new techniques ‘Seldonian’ algorithms, after Hari Seldon, the central character of Isaac Asimov’s famous Foundation series of sci-fi novels. Philip Thomas, an assistant professor of computer science at the University of Massachusetts Amherst and first author of the paper, notes, “If I use a Seldonian algorithm for diabetes treatment, I can specify that undesirable behavior means dangerously low blood sugar or hypoglycemia.”

“I can say to the machine, ‘While you’re trying to improve the controller in the insulin pump, don’t make changes that would increase the frequency of hypoglycemia.’ Most algorithms don’t give you a way to put this type of constraint on behavior; it wasn’t included in early designs.”
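Expressed in the constraint form sketched above, that instruction amounts to scoring each patient history by how much the proposed controller changes the hypoglycemia rate. The function below is a hypothetical illustration of that translation, not code from the paper.

```python
import numpy as np

def hypoglycemia_increase(proposed_rates, current_rates):
    """Per-patient score g for 'don't increase the frequency of hypoglycemia':
    positive exactly when the proposed insulin controller does worse than the
    current one, so the constraint becomes 'mean(g) <= 0 with high confidence'
    and can be fed to a safety test like the one sketched above."""
    return np.asarray(proposed_rates, dtype=float) - np.asarray(current_rates, dtype=float)

# Toy numbers only: estimated hypoglycemia events per week for five patients.
print(hypoglycemia_increase([0.8, 1.1, 0.4, 0.9, 0.6], [1.0, 1.2, 0.5, 0.9, 0.7]))
```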

Thomas adds that “this Seldonian framework will make it easier for machine learning designers to build behavior-avoidance instructions into all sorts of algorithms, in a way that can enable them to assess the probability that trained systems will function properly in the real world.”

For her part, Emma Brunskill also notes that “thinking about how we can create algorithms that best respect values like safety and fairness is essential as society increasingly relies on AI.”




Google Creates New Explainable AI Program To Enhance Transparency and Debuggability


Google recently announced the creation of a new cloud platform intended to provide insight into how an AI program renders decisions, making it easier to debug the program and enhancing transparency. As reported by The Register, the cloud platform is called Explainable AI, and it marks a major attempt by Google to invest in AI explainability.

Artificial neural networks are employed in many, perhaps most, of the major AI systems in use today. The neural networks that run major AI applications can be extraordinarily complex and large, and as a system’s complexity grows it becomes harder and harder to intuit why the system made a particular decision. As Google explains in its white paper, as AI systems become more powerful they also become more complex, and hence harder to debug. Transparency is also lost when this occurs, which means that biased algorithms can be difficult to recognize and address.

The fact that the reasoning which drives the behavior of complex systems is so hard to interpret often has drastic consequences. In addition to making it hard to combat AI bias, it can make it extraordinarily difficult to tell spurious correlations from genuinely important and interesting correlations.

Many companies and research groups are exploring how to address the “black box” problem of AI and create a system that adequately explains why an AI has made certain decisions. Google’s Explainable AI platform represents its own bid to tackle this challenge. Explainable AI comprises three different tools. The first tool is a service that describes which features have been selected by the AI and displays an attribution score representing how much influence a particular feature had on the final prediction. Google’s report on the tool gives an example of predicting how long a bike ride will last based on variables like rainfall, current temperature, day of the week, and start time. After the network renders a decision, feedback is given that displays which features had the most impact on the predictions.
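To illustrate what an attribution score is, without claiming this is how Google’s service computes it, the sketch below pairs a toy duration model with a crude baseline-difference attribution: how much does swapping each feature of a reference input for the rider’s actual value move the prediction? The model, feature names, and baseline values are all invented; production tools typically rely on more principled methods such as integrated gradients or Shapley-value sampling.

```python
# Toy stand-in for a trained bike-ride-duration model; Google's example would
# use a network served on its cloud platform instead of this formula.
def predict_duration(rainfall_mm, temp_c, day_of_week, start_hour):
    weekend = 1.0 if day_of_week >= 5 else 0.0
    return 45 - 1.5 * rainfall_mm + 0.8 * temp_c - 2.0 * weekend + 0.1 * start_hour

def baseline_difference_attributions(instance, baseline):
    """Crude attribution: for each feature, replace the baseline value with the
    instance's value and record how much the prediction moves."""
    scores = {}
    for name in instance:
        perturbed = dict(baseline)
        perturbed[name] = instance[name]
        scores[name] = predict_duration(**perturbed) - predict_duration(**baseline)
    return scores

instance = {"rainfall_mm": 6.0, "temp_c": 12.0, "day_of_week": 6, "start_hour": 9}
baseline = {"rainfall_mm": 0.0, "temp_c": 20.0, "day_of_week": 2, "start_hour": 17}
print(baseline_difference_attributions(instance, baseline))
```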

How does this tool provide such feedback in the case of image data? In that case, the tool produces an overlay that highlights the regions of the image that weighed most heavily on the rendered decision.
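One common way to build such an overlay is a gradient-based saliency map: the pixels to which the predicted class score is most sensitive are treated as the most important. This is only an assumption about how a tool of this kind might work internally, not a description of Google’s implementation; the sketch below uses an untrained torchvision ResNet and a random image purely to show the mechanics.

```python
import torch
import torchvision

# Untrained ResNet and a random image stand in for a real model and photo;
# the point is only the mechanics of turning gradients into a heat map.
model = torchvision.models.resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

top_score = model(image).max()   # score of the highest-scoring class
top_score.backward()             # gradients of that score w.r.t. each pixel

# Collapse the channel dimension to get one importance value per pixel, then
# normalise to [0, 1] so it can be alpha-blended over the original image.
saliency = image.grad.abs().max(dim=1).values
heatmap = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
print(heatmap.shape)  # torch.Size([1, 224, 224])
```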

Another tool found in the toolkit is the “What-If” tool, which displays potential fluctuations in model performance as individual attributes are manipulated. Finally, the last tool can be set up to send sample results to human reviewers on a consistent schedule.
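A rough picture of what such a what-if analysis involves, using an invented loan-approval scorer and feature names rather than the actual tool: hold every attribute fixed except one, sweep that attribute across a range, and watch how the output shifts.

```python
import numpy as np

# Invented approval model; the real What-If tool would wrap a trained model.
def approval_score(income, debt_ratio, age):
    return 1.0 / (1.0 + np.exp(-(0.00004 * income - 3.0 * debt_ratio + 0.01 * age)))

base = {"income": 52_000, "debt_ratio": 0.35, "age": 41}
for value in np.linspace(0.0, 0.9, 10):
    case = dict(base, debt_ratio=float(value))
    print(f"debt_ratio={case['debt_ratio']:.2f} -> score={approval_score(**case):.3f}")
```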

Dr. Andrew Moore, Google’s chief scientist for AI and machine learning, described the inspiration for the project. Moore explained that around five years ago the academic community started to become concerned about the harmful byproducts of AI use, and that Google wanted to ensure its systems were only being used in ethical ways. Moore described an incident in which the company was trying to design a computer vision program to alert construction workers if someone wasn’t wearing a helmet, but became concerned that the monitoring could be taken too far and become dehumanizing. Moore said there was a similar reason that Google decided not to release a general face recognition API: the company wanted to have more control over how its technology was used and to ensure it was only being used in ethical ways.

Moore also highlighted why it is so important for an AI’s decisions to be explainable:

“If you’ve got a safety critical system or a societally important thing which may have unintended consequences, if you think your model’s made a mistake, you have to be able to diagnose it. We want to explain carefully what explainability can and can’t do. It’s not a panacea.”
