Google Creates New Explainable AI Program To Enhance Transparency and Debuggability

Just recently, Google announced the creation of a new cloud platform intended to give insight into how an AI program renders decisions, making debugging easier and enhancing transparency. As reported by The Register, the cloud platform is called Explainable AI, and it marks a major investment by Google in AI explainability.

Artificial neural networks are employed in many, perhaps most, of the major AI systems in use around the world today. The neural networks that run major AI applications can be extraordinarily complex and large, and as a system’s complexity grows it becomes harder and harder to intuit why the system made a particular decision. As Google explains in its white paper, as AI systems become more powerful, they also become more complex and hence harder to debug. Transparency is also lost when this occurs, which means that biased algorithms can be difficult to recognize and address.

The fact that the reasoning driving the behavior of complex systems is so hard to interpret often has drastic consequences. In addition to making it hard to combat AI bias, it can make it extraordinarily difficult to distinguish spurious correlations from genuinely important and interesting ones.

Many companies and research groups are exploring how to address the “black box” problem of AI and create systems that adequately explain why an AI made certain decisions. Google’s Explainable AI platform represents its own bid to tackle this challenge. Explainable AI comprises three different tools. The first is a system that describes which features a model used and displays an attribution score representing the amount of influence each feature had on the final prediction. Google’s report on the tool gives an example of predicting how long a bike ride will last based on variables like rainfall, current temperature, day of the week, and start time. After the network renders its decision, feedback is given that displays which features had the most impact on the prediction.
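On Google’s platform these scores come from attribution methods such as integrated gradients and Sampled Shapley. The snippet below is a minimal sketch of the integrated-gradients idea only, applied to an invented bike-duration model; the model, feature values, and baseline are all assumptions for illustration, not Google’s API.

```python
import numpy as np

# Hypothetical toy model: predicted ride duration (minutes) from
# [rainfall_mm, temperature_c, day_of_week, start_hour].
def predict(x):
    rain, temp, dow, hour = x
    return 30 + 4.0 * rain - 0.5 * temp + 2.0 * (dow >= 5) + 0.3 * abs(hour - 8)

def integrated_gradients(f, x, baseline, steps=64, eps=1e-4):
    """Approximate per-feature attributions by averaging numerical
    gradients of f along the straight path from baseline to x."""
    x, baseline = np.asarray(x, float), np.asarray(baseline, float)
    total = np.zeros_like(x)
    for alpha in np.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        # Central-difference gradient estimate at this path point.
        total += np.array([(f(point + d) - f(point - d)) / (2 * eps)
                           for d in np.eye(len(x)) * eps])
    return (x - baseline) * total / steps  # attribution per feature

features = ["rainfall_mm", "temperature_c", "day_of_week", "start_hour"]
x = [5.0, 12.0, 6, 17]        # a rainy weekend evening ride
baseline = [0.0, 20.0, 2, 8]  # an "average" reference ride
for name, score in zip(features, integrated_gradients(predict, x, baseline)):
    print(f"{name:>14}: {score:+.2f} min")
```

The sum of the attributions approximates the gap between the prediction and the baseline prediction, which is what makes the scores interpretable as each feature’s share of the outcome.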

How does this tool provide such feedback in the case of image data? In that case, the tool produces an overlay that highlights the regions of the image that weighed most heavily in the rendered decision.
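The sketch below illustrates that overlay idea with a generic occlusion-sensitivity map, which is not necessarily the technique Google uses; the grayscale `image` array and scalar-scoring `model` callable are assumptions.

```python
import numpy as np

def occlusion_map(model, image, patch=8, stride=8, fill=0.0):
    """Crude occlusion-sensitivity map: mask each patch of a grayscale
    image and record how much the model's score drops. A bigger drop
    means the region weighed more heavily in the decision."""
    h, w = image.shape
    base_score = model(image)
    heat = np.zeros((h, w))
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch] = fill
            heat[top:top + patch, left:left + patch] = base_score - model(occluded)
    heat = np.clip(heat, 0.0, None)      # keep only score-reducing regions
    return heat / (heat.max() + 1e-9)    # normalize to [0, 1] for overlaying
```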

Another tool found in the toolkit is the “What-If” tool, which displays potential fluctuations in model performance as individual attributes are manipulated. Finally, the last tool can be set up to give sample results to human reviewers on a consistent schedule.
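The view the What-If tool provides can be approximated by sweeping one input while holding the rest fixed. A minimal sketch, with a hypothetical stand-in model:

```python
import numpy as np

# Hypothetical stand-in model: ride duration rises with rainfall,
# falls with temperature.
def predict(x):
    return 30 + 4.0 * x[0] - 0.5 * x[1]

def what_if_sweep(f, x, feature_idx, values):
    """Hold every feature fixed except one, and record how the
    prediction moves as that feature sweeps across a range of values."""
    x = list(x)
    rows = []
    for v in values:
        x[feature_idx] = v
        rows.append((v, f(x)))
    return rows

# How does the predicted ride duration respond to rainfall alone?
for rain, minutes in what_if_sweep(predict, [5.0, 12.0], 0, np.linspace(0, 10, 5)):
    print(f"rainfall={rain:4.1f} mm -> predicted {minutes:5.1f} min")
```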

Dr. Andrew Moore, Google’s chief scientist for AI and machine learning, described the inspiration for the project. Moore explained that around five years ago the academic community started to become concerned about the harmful byproducts of AI use, and that Google wanted to ensure its systems were only being used in ethical ways. Moore described an incident in which the company was trying to design a computer vision program to alert construction workers if someone wasn’t wearing a helmet, but became concerned that the monitoring could be taken too far and become dehumanizing. Moore said similar concerns were behind Google’s decision not to release a general face recognition API, as the company wanted more control over how its technology was used and to ensure it was only being used in ethical ways.

Moore also highlighted why it is so important for an AI’s decisions to be explainable:

“If you’ve got a safety-critical system or a societally important thing which may have unintended consequences, if you think your model’s made a mistake, you have to be able to diagnose it. We want to explain carefully what explainability can and can’t do. It’s not a panacea.”

AI Model Used To Map Dryness Of Forests, Predict Wildfires

A new deep learning model designed by researchers from Stanford University maps moisture levels across 12 different states in order to assist in the prediction of wildfires and help fire management teams get ahead of potentially destructive blazes.

Fire management teams aim to predict where the worst blazes might occur so that preventative measures like prescribed burns can be carried out. Predicting points of origin and spreading patterns for wildfires requires information about fuel amounts and moisture levels in the target region. Collecting and analyzing this data at the speed required to be useful to wildfire management teams is difficult, but deep learning models could help automate these critical processes.

As Futurity recently reported, researchers from Stanford University collected climate data and designed a model intended to render detailed maps of moisture levels across 12 western states, including the Pacific Coast states, Texas, Wyoming, Montana, and the southwestern states. According to the researchers, although the model is still undergoing refinement, it is already capable of revealing areas at high risk for forest fires where the landscape is unusually dry.

The typical method of collecting data on fuel and moisture levels for a target region is to painstakingly compare dried-out vegetation to more moist vegetation. Specifically, researchers collect vegetation samples from trees and weigh them. Afterwards, the vegetation samples are dried out and reweighed. Comparing the weight of the dry samples with that of the wet samples determines the amount of moisture in the vegetation. This process is long and complex, and it is only viable in certain areas and for some species of vegetation. However, the data collected over decades of this work has been used to create the National Fuel Moisture Database, which comprises over 200,000 records. The fuel-moisture content of a region is well known to be linked to wildfire risk, though it’s still unknown just how much its role varies between ecosystems and from one plant species to another.
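Concretely, fuel moisture content is conventionally reported as the water lost on drying, expressed as a percentage of the sample’s dry weight. A minimal worked example:

```python
def fuel_moisture_content(wet_g: float, dry_g: float) -> float:
    """Live fuel moisture content (%): water lost on drying,
    expressed relative to the sample's dry weight."""
    return 100.0 * (wet_g - dry_g) / dry_g

# A 120 g fresh sample that weighs 80 g after oven-drying:
print(fuel_moisture_content(120.0, 80.0))  # 50.0 -> 50% moisture content
```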

Krishna Rao, a PhD student in earth system science at Stanford, was the lead author of the new study, and Rao explained to Futurity that machine learning affords researchers the ability to test assumptions about links between live fuel moisture and weather across different ecosystems. Rao and colleagues trained a recurrent neural network model on data from the National Fuel Moisture Database. The model was then tested by estimating fuel moisture levels based on measurements collected by space sensors. The data included signals from synthetic aperture radar (SAR), microwave radar signals that penetrate to the surface, as well as visible light bouncing off the planet’s surface. The training and validation data for the model consisted of three years of data for approximately 240 sites across the western US, starting in 2015.
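The study’s exact architecture isn’t described here, so the following is only a generic stand-in: a small Keras LSTM regressor mapping a site’s time series of satellite features (SAR backscatter plus visible-band reflectance) to a fuel-moisture estimate. The time-step count, feature count, and layer sizes are all assumptions.

```python
import tensorflow as tf

# Assumed shapes: 36 time steps of 8 satellite features per site
# (e.g. SAR backscatter channels plus visible-band reflectances).
TIME_STEPS, N_FEATURES = 36, 8

model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIME_STEPS, N_FEATURES)),
    tf.keras.layers.LSTM(64),                   # summarize the site's history
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                   # estimated fuel moisture (%)
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, ...)  # X: (sites, TIME_STEPS, N_FEATURES), y: LFMC %
```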

The researchers ran analyses on various types of land coverage, including sparse vegetation, grasslands, shrublands, needleleaf evergreen forests, and broadleaf deciduous forests. The model’s predictions were most accurate, most reliably matching the NFMD measurements, in shrubland regions. This is fortunate, as shrublands comprise approximately 45% of the ecosystems found throughout the US west. Shrublands, particularly chaparral shrublands, are often uniquely susceptible to fire, as seen in many of the fires that burned throughout California over recent years.

The predictions generated by the model have been used to create an interactive map that fire management agencies could one day use to prioritize regions for fire control and discern other relevant patterns. The researchers believe that with further training and refinement the model could be used to improve fire prediction more broadly.

As Alexandra Konings, assistant professor of earth system science at Stanford, explained to Futurity:

“Creating these maps was the first step in understanding how this new fuel moisture data might affect fire risk and predictions. Now we’re trying to really pin down the best ways to use it for improved fire prediction.”

Researchers Develop Method for Artificial Neuronal Networks to Communicate with Biological Ones

A group of researchers has developed a way for artificial neuronal networks to communicate with biological neuronal networks. The new development is a big step forward for neuroprosthetic devices, which replace damaged neurons with artificial neuronal circuitry. 

The new method relies on converting artificial electrical spiking signals into a visual pattern, which is then used, via optogenetic stimulation, to entrain the biological neurons.

The article titled “Toward neuroprosthetic real-time communication from in silico to biological neuronal network via patterned optogenetic stimulation” was published in Scientific Reports.

Neuroprosthetic Technology

An international team led by Ikerbasque Researcher Paolo Bonifazi of the Biocruces Health Research Institute in Bilbao, Spain, set out to create the neuroprosthetic technology. He was joined by Timothée Levi of the Institute of Industrial Science at the University of Tokyo.

One of the biggest challenges surrounding this technology is that neurons in the brain communicate with extreme precision, whereas the electrical output of an artificial network is not capable of targeting specific neurons.

To get around this, the team of researchers converted the electrical signals to light. 

According to Levi, “advances in optogenetic technology allowed us to precisely target neurons in a very small area of our biological neuronal network.”

Optogenetics

Optogenetics is a technology that relies on light-sensitive proteins found in algae and other organisms. When these proteins are inserted into neurons, shining light on a neuron can make it active or inactive, depending on the type of protein.

The researchers used proteins that were activated specifically by blue light. The first step was to convert the electrical output of the spiking neuronal network into a checkered pattern of blue and black squares. This pattern was then projected onto a 0.8 by 0.8 mm square of the biological neural network, which was growing in a dish. When this happened, only the neurons hit by the light coming from the blue squares were activated.
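A minimal sketch of that encoding step, with an assumed grid size and neuron-to-square assignment: each square of the projected checkerboard is lit blue if its artificial neuron spiked in the current time window, and left black otherwise.

```python
import numpy as np

def spikes_to_pattern(spikes, grid=8):
    """Map a binary spike vector from the in-silico network onto a
    grid x grid checkerboard: 1 = blue (stimulating) square, 0 = black.
    Each square is driven by one artificial neuron; extras stay dark."""
    pattern = np.zeros(grid * grid, dtype=np.uint8)
    n = min(len(spikes), grid * grid)
    pattern[:n] = np.asarray(spikes[:n], dtype=np.uint8)
    return pattern.reshape(grid, grid)

# e.g. 16 artificial neurons, some of which spiked this time window:
frame = spikes_to_pattern(np.random.rand(16) > 0.7, grid=4)
print(frame)  # 4x4 array of 0/1 to project onto the culture
```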

Cultured neurons spontaneously produce synchronous activity, resulting in a type of rhythm that depends on the way the neurons are connected together, the different types of neurons, and how they adapt and change.

“The key to our success,” says Levi, “was understanding that the rhythms of the artificial neurons had to match those of the real neurons. Once we were able to do this, the biological network was able to respond to the ‘melodies’ sent by the artificial one. Preliminary results obtained during the European Brainbow project helped us design these biomimetic artificial neurons.”

After tuning the artificial neural network to different rhythms, the researchers eventually found the best match and were able to identify changes in the global rhythms of the biological network.

“Incorporating optogenetics into the system is an advance towards practicality,” says Levi. “It will allow future biomimetic devices to communicate with specific types of neurons or within specific neuronal circuits.”

Future prosthetic devices developed with the system could replace damaged brain circuits and restore communication between different regions of the brain. All of this could lead to an extremely impressive generation of neuroprostheses.

 

Engineers Develop Energy-Efficient “Early Bird” Method to Train Deep Neural Networks

Engineers at Rice University have developed a new method for training deep neural networks (DNNs) with a fraction of the energy normally required. DNNs are the form of artificial intelligence (AI) that plays a key role in the development of technologies such as self-driving cars, intelligent assistants, facial recognition, and other applications.

Early Bird was detailed in an April 29 paper by researchers from Rice and Texas A&M University, presented at the International Conference on Learning Representations (ICLR 2020).

The study’s lead authors were Haoran You and Chaojian Li of Rice’s Efficient and Intelligent Computing (EIC) Lab. In one study, they demonstrated that the method could train a DNN to the same level of accuracy as today’s methods while using 10.7 times less energy.

The research was led by EIC Lab director Yingyan Lin, Rice’s Richard Baraniuk, and Texas A&M’s Zhangyang Wang. Other co-authors include Pengfei Xu, Yonggan Fu, Yue Wang, and Xiaohan Chen. 

“A major driving force in recent AI breakthroughs is the introduction of bigger, more expensive DNNs,” Lin said. “But training these DNNs demands considerable energy. For more innovations to be unveiled, it is imperative to find ‘greener’ training methods that both address environmental concerns and reduce financial barriers of AI research.”

Expensive to Train DNNs

It can be very expensive to train the world’s best DNNs, and the price tag continues to increase. A 2019 study led by the Allen Institute for AI in Seattle found that the number of computations needed to train a top-flight deep neural network increased 300,000-fold between 2012 and 2018. Another 2019 study, this time led by researchers at the University of Massachusetts Amherst, found that training a single elite DNN can release roughly as much carbon dioxide as the lifetime emissions of five U.S. automobiles.

To perform their highly specialized tasks, DNNs consist of millions of artificial neurons. They are capable of learning how to make decisions, sometimes outperforming humans, by observing large numbers of examples, without needing explicit programming.

Prune and Train

Lin is an assistant professor of electrical and computer engineering in Rice’s Brown School of Engineering. 

“The state-of-the-art way to perform DNN training is called progressive prune and train,” Lin said. “First, you train a dense, giant network, then remove parts that don’t look important — like pruning a tree. Then you retrain the pruned network to restore performance because performance degrades after pruning. And in practice you need to prune and retrain many times to get good performance.”

This method works because not all of the artificial neurons are needed to complete the specialized task: training fortifies some connections between neurons, while others can be discarded. Pruning cuts computational costs and reduces model size, which makes fully trained DNNs more affordable.
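A minimal sketch of a single prune step in this loop, using global magnitude pruning in PyTorch; the sparsity level and the commented training loop are assumptions rather than the paper’s exact procedure.

```python
import torch

def magnitude_prune(model, sparsity=0.5):
    """One pruning step: zero out the smallest-magnitude weights
    globally and return binary masks marking surviving connections."""
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(all_weights, sparsity)
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                  # prune weight matrices, not biases
            masks[name] = (p.detach().abs() > threshold).float()
            p.data *= masks[name]        # apply the mask in place
    return masks

# Progressive prune-and-train: train dense -> prune -> retrain -> repeat.
# for _ in range(n_rounds):
#     train(model, epochs=k)            # (re)train to restore accuracy
#     masks = magnitude_prune(model, sparsity=0.5)
```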

“The first step, training the dense, giant network, is the most expensive,” Lin said. “Our idea in this work is to identify the final, fully functional pruned network, which we call the ‘early-bird ticket,’ in the beginning stage of this costly first step.”

The researchers do this by looking for key network connectivity patterns, which allowed them to discover these early-bird tickets and speed up DNN training.
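In the Early Bird paper, these connectivity patterns are tracked by comparing the pruning masks drawn at consecutive epochs: once the mask largely stops changing, the ticket has emerged. A minimal sketch of that mask-distance check, with an assumed threshold and a hypothetical `switch_to_pruned_training` helper:

```python
import torch

def mask_distance(mask_a, mask_b):
    """Normalized Hamming distance between two pruning masks
    (dicts of 0/1 tensors keyed by parameter name)."""
    diff = total = 0
    for name in mask_a:
        diff += (mask_a[name] != mask_b[name]).sum().item()
        total += mask_a[name].numel()
    return diff / total

# Draw a candidate mask once per epoch (without committing to it) and
# compare it with the previous epoch's mask; a stable mask signals that
# an early-bird ticket has emerged and pruned training can begin.
# if mask_distance(masks[epoch], masks[epoch - 1]) < 0.1:  # assumed threshold
#     switch_to_pruned_training(masks[epoch])  # hypothetical helper
```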

Early Bird in the Beginning Phase of Training

Lin and the other researchers found that early-bird tickets can appear one-tenth or less of the way through training.

“Our method can automatically identify early-bird tickets within the first 10% or less of the training of the dense, giant networks,” Lin said. “This means you can train a DNN to achieve the same or even better accuracy for a given task in about 10% or less of the time needed for traditional training, which can lead to more than an order of magnitude in savings in both computation and energy.”

Besides making training faster and more energy-efficient, the researchers have a strong focus on environmental impact.

“Our goal is to make AI both more environmentally friendly and more inclusive,” she said. “The sheer size of complex AI problems has kept out smaller players. Green AI can open the door, enabling researchers with a laptop or limited computational resources to explore AI innovations.”

The research received support from the National Science Foundation. 

 
