Ethics

Google Creates New Explainable AI Program To Enhance Transparency and Debuggability

Just recently, Google announced the creation of a new cloud platform intended to provide insight into how an AI program renders decisions, making it easier to debug a program and enhancing transparency. As reported by The Register, the cloud platform is called Explainable AI, and it marks a major investment by Google in AI explainability.

Artificial neural networks underpin many, perhaps most, of the major AI systems in use around the world today. The neural networks that run major AI applications can be extraordinarily large and complex, and as a system’s complexity grows it becomes harder and harder to intuit why the system has made a particular decision. As Google explains in its white paper, as AI systems become more powerful they also become more complex, and hence harder to debug. Transparency is also lost when this occurs, which means that biased algorithms can be difficult to recognize and address.

The fact that the reasoning which drives the behavior of complex systems is so hard to interpret often has drastic consequences. In addition to making it hard to combat AI bias, it can make it extraordinarily difficult to tell spurious correlations from genuinely important and interesting correlations.

Many companies and research groups are exploring how to address the “black box” problem of AI and create systems that adequately explain why an AI has made certain decisions. Google’s Explainable AI platform represents its own bid to tackle this challenge. Explainable AI comprises three different tools. The first is a system that describes which features have been selected by a model and displays an attribution score representing how much influence each feature had on the final prediction. Google’s report on the tool gives the example of predicting how long a bike ride will last based on variables like rainfall, current temperature, day of the week, and start time. After the network renders its prediction, feedback is given showing which features had the most impact on it.
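
Google does not spell out the attribution method in this summary, but the basic idea of an attribution score can be sketched with a simple baseline-comparison approach. The model, feature names, and baseline values below are hypothetical; this is a rough illustration, not Google’s Explainable AI API:

```python
# Minimal illustrative sketch of feature attribution, NOT Google's Explainable
# AI API: each feature is replaced with a baseline value and the change in the
# prediction is recorded as that feature's rough attribution score.

def attribute(predict, example, baseline):
    """Return a rough attribution score for each feature of one example."""
    full_prediction = predict(example)
    attributions = {}
    for feature in example:
        perturbed = dict(example)
        perturbed[feature] = baseline[feature]   # remove this feature's signal
        attributions[feature] = full_prediction - predict(perturbed)
    return attributions

# Hypothetical bike-ride duration model (minutes), used purely for illustration.
def toy_model(x):
    return 30 - 2.0 * x["rainfall_mm"] + 0.5 * x["temperature_c"] \
           - 1.0 * x["is_weekend"] + 0.1 * x["start_hour"]

example  = {"rainfall_mm": 5, "temperature_c": 18, "is_weekend": 1, "start_hour": 8}
baseline = {"rainfall_mm": 0, "temperature_c": 15, "is_weekend": 0, "start_hour": 12}

print(attribute(toy_model, example, baseline))
# e.g. rainfall_mm alone shifts the prediction by -10 minutes versus the baseline
```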

How does this tool provide such feedback in the case of image data? Here, the tool produces an overlay that highlights the regions of the image that weighed most heavily in the rendered decision.

Another tool in the toolkit is the “What-If” tool, which displays potential fluctuations in model performance as individual attributes are manipulated. Finally, the last tool can be set up to give sample results to human reviewers on a consistent schedule.
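
The What-If behavior can be approximated by sweeping a single attribute while holding the rest of an example fixed. The toy model and values below are invented for illustration and are not part of Google’s tooling:

```python
# Sketch of a What-If-style sweep: vary one attribute, hold the others fixed,
# and record how the prediction moves. Model and numbers are hypothetical.

def toy_model(x):
    """Toy bike-ride duration model (minutes), for illustration only."""
    return 30 - 2.0 * x["rainfall_mm"] + 0.5 * x["temperature_c"]

def what_if_sweep(predict, example, feature, values):
    """Return (value, prediction) pairs as one feature is swept."""
    results = []
    for value in values:
        probe = dict(example)
        probe[feature] = value
        results.append((value, predict(probe)))
    return results

example = {"rainfall_mm": 5, "temperature_c": 18}
for rainfall, minutes in what_if_sweep(toy_model, example, "rainfall_mm", range(0, 21, 5)):
    print(f"rainfall = {rainfall:2d} mm -> predicted ride {minutes:.1f} min")
```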

Dr. Andrew Moore, Google’s chief scientist for AI and machine learning, described the inspiration for the project. Moore explained that around five years ago the academic community started to become concerned about the harmful byproducts of AI use, and that Google wanted to ensure its systems were only being used in ethical ways. Moore described an incident in which the company was trying to design a computer vision program to alert construction workers if someone wasn’t wearing a helmet, but became concerned that the monitoring could be taken too far and become dehumanizing. Moore said there was a similar reason Google decided not to release a general face recognition API: the company wanted more control over how its technology was used and to ensure it was only being used in ethical ways.

Moore also highlighted why it is so important for an AI’s decisions to be explainable:

“If you’ve got a safety critical system or a societally important thing which may have unintended consequences if you think your model’s made a mistake, you have to be able to diagnose it. We want to explain carefully what explainability can and can’t do. It’s not a panacea.”

Artificial Neural Networks

New Technique Lets AI Intuitively Understand Some Physics

Artificial intelligence has been able to develop an understanding of physics through reinforcement learning for some time now, but a new technique developed by researchers at MIT could help engineers design models that demonstrate an intuitive understanding of physics.

Psychological research has shown that, to some extent, humans have an intuitive understanding of the laws of physics. Infants have expectations about how objects should move and interact, and they react with surprise when those expectations are violated. The research conducted by the MIT team has the potential not only to drive new applications of artificial intelligence, but also to help psychologists understand how infants perceive and learn about the world.

The model designed by the MIT team is called ADEPT, and it functions by making predictions about how objects should behave in a physical space. The model observes objects and keeps track of a “surprise” metric as it does so. If something unexpected happens, the model responds by increasing its surprise value. Unexpected and seemingly impossible events, such as an object teleporting or vanishing altogether, produce a dramatic rise in surprise.

The goal of the research team was to get their model to register the same levels of surprise that humans register when they see objects behaving in implausible ways.

ADEPT has two major components: a physics engine and an inverse graphics module. The physics engine is responsible for predicting how an object will move, producing a future representation of the object from a range of possible states. Meanwhile, the inverse graphics module is responsible for creating the representations of objects that are fed into the physics engine.

The inverse graphics module tracks several different attributes, such as the velocity, shape, and orientation of an object, extracting this information from video frames. The module focuses only on the most salient details, ignoring those that won’t help the physics engine interpret the object and predict new states. By concentrating on the most important details, the model is better able to generalize to new objects. The physics engine then takes these object descriptions and simulates more complex physical behavior, like fluidity or rigidity, in order to make predictions about how the object should behave.
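
As a rough structural sketch (not the authors’ code), the two stages can be thought of as a perception step that reduces each frame to a handful of salient attributes, and a stochastic simulator that samples plausible next states; the constant-velocity “physics engine” below is a deliberately crude stand-in:

```python
import random
from dataclasses import dataclass

# Structural sketch of the two-stage pipeline described above -- not the
# authors' implementation. A real inverse graphics module would be a learned
# perception model; here the "physics engine" is just constant-velocity
# extrapolation with noise, to show where a genuine simulator would plug in.

@dataclass
class ObjectState:
    x: float          # position
    y: float
    vx: float         # velocity
    vy: float

def physics_engine(state, n_samples=100, noise=0.05):
    """Sample plausible next states for one object (crude stand-in)."""
    return [
        ObjectState(
            x=state.x + state.vx + random.gauss(0, noise),
            y=state.y + state.vy + random.gauss(0, noise),
            vx=state.vx,
            vy=state.vy,
        )
        for _ in range(n_samples)
    ]

candidates = physics_engine(ObjectState(x=0.0, y=0.0, vx=1.0, vy=0.0))
print(len(candidates), "candidate next states sampled")
```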

After this intake process, the model observes the actual next frame in the video, which it uses to recalculate its probability distribution over possible object behaviors. Surprise is inversely proportional to the probability the model assigned to the observed event, so the model only registers great surprise when there is a major mismatch between what it believes should happen next and what actually happens.
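
One common way to turn the probability assigned to an observation into a surprise score is the negative log-likelihood; the paper may formalize surprise differently, so treat the following only as an illustration of the “low probability means high surprise” relationship:

```python
import math

# Hedged sketch: surprise as negative log-likelihood (surprisal). Low-probability
# observations produce large values; near-certain observations produce values
# close to zero. ADEPT's exact formulation may differ.

def surprise(probability_of_observation: float) -> float:
    return -math.log(max(probability_of_observation, 1e-12))

print(round(surprise(0.9), 2))    # expected behaviour -> low surprise (~0.11)
print(round(surprise(0.001), 2))  # object seems to teleport -> high surprise (~6.91)
```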

The research team needed some way to compare the surprise of their model to the surprise of people observing the same object behavior. In developmental psychology, researchers often test infants by showing them two different videos. In one video, an object behaves as you would expect objects to in the real world, not spontaneously vanishing or teleporting. In the other video, an object violates the laws of physics in some fashion. The research team took these same basic concepts and had 60 adults watch 64 different videos of both expected and unexpected physical behavior. The participants were then asked to rate their surprise at various moments in the video on a scale of 1 to 100.

Analysis of the model’s performance demonstrated that it did quite well on videos where an object was moved behind a wall and had disappeared when the wall was removed, typically matching human surprise levels in these cases. The model also appeared to be surprised by videos where humans didn’t demonstrate surprise but arguably should have. As an example, in order for an object to move behind a wall at a given speed and immediately come out on the other side, it must have either teleported or experienced a dramatic increase in speed.

When compared with traditional neural networks that can learn from observation but do not explicitly log object representations, the researchers found that ADEPT was much more accurate at discriminating between surprising and unsurprising scenes, and that its performance aligned more closely with human reactions.

The MIT research team is aiming to do more research and gain deeper insight into how infants observe the world around them and learn from these observations, incorporating their findings into new versions of the ADEPT model.

Artificial Neural Networks

AI Can Avoid Specific Unwanted Behaviors With New Algorithms

As artificial intelligence algorithms and systems become more sophisticated and take on bigger responsibilities, it becomes more and more important to ensure that AI systems avoid dangerous, unwanted behaviors. Recently, a team of researchers from the University of Massachusetts Amherst and Stanford published a paper demonstrating how specific AI behaviors can be avoided through a technique that elicits precise mathematical instructions that can be used to tweak an AI’s behavior.

According to TechXplore, the research was predicated on the assumption that unfair or unsafe behaviors can be defined with mathematical functions and variables. If this is true, it should be possible for researchers to train systems to avoid those specific behaviors. The research team aimed to develop a toolkit that users of an AI could employ to specify which behaviors they want the AI to avoid, and that would enable AI engineers to reliably train a system to avoid unwanted actions when used in real-world scenarios.

Phillip Thomas, the first author on the paper and an assistant professor of computer science at the University of Massachusetts Amherst, explained that the research team aims to demonstrate that designers of machine learning algorithms can make it easier for users of AI to describe unwanted behaviors and have it be highly likely that the AI system will avoid them.

The research team tested their technique by applying it to a common problem in data science: gender bias. The team aimed to make the algorithms used to predict college students’ GPAs fairer by reducing gender bias. Using an experimental dataset, they instructed their AI system to avoid creating models that systematically underestimated or overestimated GPAs for one gender. As a result of the researchers’ instructions, the algorithm created a model that better predicted student GPAs and had substantially less systemic gender bias than previously existing models. Previous GPA prediction models suffered from bias because bias reduction methods were often too limited to be useful, or no bias reduction was applied at all.
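
The framework in the paper is considerably more involved, but the core pattern — propose a candidate model, then only accept it if a high-confidence test indicates the unwanted behavior is unlikely — can be sketched roughly as follows. The bias measure, threshold, and confidence bound here are simplified stand-ins, not the authors’ definitions:

```python
import math
import statistics

# Rough sketch of the "propose, then safety-test" pattern described above.
# This is NOT the authors' Seldonian algorithm: the bias measure, threshold,
# and confidence bound are simplified placeholders.

def mean_signed_error(model, examples):
    """Average signed error (predicted GPA - true GPA) over one group."""
    errors = [model(features) - true_gpa for features, true_gpa in examples]
    return statistics.mean(errors), statistics.stdev(errors), len(errors)

def passes_safety_test(model, group_a, group_b, threshold=0.05, z=1.645):
    """Accept the model only if, with high confidence, the gap in mean signed
    error between the two groups stays below the threshold."""
    mean_a, sd_a, n_a = mean_signed_error(model, group_a)
    mean_b, sd_b, n_b = mean_signed_error(model, group_b)
    gap = abs(mean_a - mean_b)
    margin = z * math.sqrt(sd_a ** 2 / n_a + sd_b ** 2 / n_b)   # crude upper bound
    return gap + margin <= threshold

# Usage: train a candidate GPA model on one data split, then call
# passes_safety_test(candidate, held_out_group_a, held_out_group_b); if the
# test fails, report "no solution found" instead of deploying the model.
```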

A different algorithm was also developed by the research team. This algorithm was implemented in an automated insulin pump and was intended to balance both performance and safety. Automated insulin pumps need to decide how large an insulin dose a patient should be given. After eating, the pump should ideally deliver a dose just large enough to keep blood sugar levels in check; the doses it delivers must be neither too large nor too small.

Machine learning algorithms are already proficient at identifying patterns in an individual’s response to insulin doses, but existing analysis methods don’t let doctors specify outcomes that should be avoided, such as low blood sugar crashes. In contrast, the research team was able to develop a method that could be trained to deliver insulin doses that stay between the two extremes, preventing either underdosing or overdosing. While the system isn’t ready for testing in real patients just yet, a more sophisticated AI based on this approach could improve quality of life for those suffering from diabetes.

In the research paper, the researchers refer to the algorithm as a “Seldonian” algorithm, in reference to the Sci-Fi author Isaac Asimov and his three laws of robotics. The implication is that the AI system “may not injure a human being or, through inaction, allow a human being to come to harm.” The research team hopes that their framework will allow AI researchers and engineers to create a variety of algorithms and systems that avoid dangerous behavior. Emma Brunskill, senior author of the paper and assistant professor of computer science at Stanford, explained to TechXplore:

“We want to advance AI that respects the values of its human users and justifies the trust we place in autonomous systems.”

Artificial Neural Networks

AI System Automatically Transforms To Evade Censorship Attempts

Research conducted by scientists at the University of Maryland (UMD) has produced an AI-powered program that can transform itself to evade internet censorship attempts. As reported by TechXplore, authoritarian governments that censor the internet and the engineers who try to counter that censorship are locked in an arms race, with each side trying to outdo the other. Learning to circumvent censorship techniques typically takes longer than developing the censorship techniques themselves, but a new system developed by the University of Maryland team could make adapting to censorship attempts easier and quicker.

The tool invented by the research team is dubbed Geneva, which stands for Genetic Evasion. The tool dodges censorship attempts by exploiting bugs and logic failures in censors that are hard for humans to find.

Information on the internet is transported in the form of packets: data is broken into small chunks at the sender’s computer, sent across the network, and reassembled when it arrives at the receiver’s computer. A common method of censoring the internet is to monitor the packet data created when a search is made. After monitoring these packets, the censor can block results for certain banned keywords or domain names.

Geneva works by modifying how the packet data is actually broken up and transferred. This means that the censorship algorithms don’t classify the searches or results as banned content, or are otherwise unable to block the connection.
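
As a purely illustrative example of the kind of strategy this can produce (not Geneva’s actual code), a request can be segmented so that a monitored keyword never appears intact in any single packet payload:

```python
# Illustrative sketch only -- not Geneva's strategy engine. One simple evasion
# idea: segment a request so that a banned keyword never appears intact inside
# any single packet's payload, defeating naive per-packet keyword matching.

def segment_payload(payload: bytes, keyword: bytes, default_size: int = 64):
    """Split payload into chunks, forcing a break in the middle of keyword."""
    idx = payload.find(keyword)
    if idx == -1:
        return [payload[i:i + default_size] for i in range(0, len(payload), default_size)]
    split_at = idx + len(keyword) // 2      # break the keyword across two segments
    return [payload[:split_at], payload[split_at:]]

# Hypothetical request containing a monitored term.
request = b"GET /search?q=banned_term HTTP/1.1\r\nHost: example.com\r\n\r\n"
for chunk in segment_payload(request, b"banned_term"):
    print(chunk)
```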

Geneva utilizes a genetic algorithm, a type of algorithm inspired by biological evolution. In place of DNA strands, Geneva uses small chunks of code as its building blocks. These bits of code can be rearranged into specific combinations that evade attempts to break up or stall data packets. The bits of code are rearranged over multiple generations, using a strategy that combines the instructions that best evaded censorship in the previous generation to create a new set of instructions and strategies. This evolutionary process enables sophisticated evasion techniques to be created fairly quickly. Geneva is capable of operating as a user browses the web, running in the background of the browser.
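
A minimal genetic-algorithm loop in the spirit of that description might look like the following. The action names and fitness function are abstract placeholders; Geneva’s real building blocks operate on live packet streams:

```python
import random

# Minimal genetic-algorithm loop matching the description above. The actions
# and fitness function are placeholders, not Geneva's actual building blocks.

ACTIONS = ["duplicate", "fragment", "tamper_ttl", "drop", "reorder"]

def random_strategy(length=4):
    return [random.choice(ACTIONS) for _ in range(length)]

def crossover(parent_a, parent_b):
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(strategy, rate=0.2):
    return [random.choice(ACTIONS) if random.random() < rate else action
            for action in strategy]

def evolve(fitness, generations=50, population_size=30):
    population = [random_strategy() for _ in range(population_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: population_size // 2]           # keep the best evaders
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(population_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# fitness would score how often a strategy slips requests past the censor; any
# callable works for a dry run, e.g. evolve(lambda s: s.count("fragment")).
```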

Dave Levin, an assistant professor of computer science at UMD, explained that Geneva puts anti-censors at a distinct advantage for the first time. Levin also explained that the method the researchers used to create their tool flips traditional censorship-evasion strategies on their head. Traditional methods of defeating censorship involve understanding how a censorship strategy works and then reverse-engineering methods to beat it. In the case of Geneva, however, the program figures out how to evade the censor, and then the researchers analyze what censorship strategies are being used.

In order to test the tool’s performance, the research team tried Geneva out on a computer located in China equipped with an unmodified Google Chrome browser. When the researchers used the strategies that Geneva identified, they were able to browse for keyword results without censorship. The tool also proved useful in India and Kazakhstan, which also block certain URLs.

The research team aims to release the code and data used to create the model sometime soon, hoping that it will give people in authoritarian countries better, more open access to information. The research team is also experimenting with a method of deploying the tool on the device that serves the blocked content instead of the client’s computer (the computer that makes the search). If successful, this would mean that people could access blocked content without installing the tool on their computers.

“If Geneva can be deployed on the server-side and work as well as it does on the client-side, then it could potentially open up communications for millions of people,” Levin said. “That’s an amazing possibility, and it’s a direction we’re pursuing.”
