AI Can Avoid Specific Unwanted Behaviors With New Algorithms

As artificial intelligence algorithms and systems become more sophisticated and take on bigger responsibilities, it becomes increasingly important to ensure that AI systems avoid dangerous, unwanted behavior. Recently, a team of researchers from the University of Massachusetts Amherst and Stanford University published a paper demonstrating how specific AI behaviors can be avoided through a technique that lets users supply precise mathematical instructions, which are then used to constrain the behavior of an AI.

According to TechXplore, the research was predicated on the assumption that unfair or unsafe behaviors can be defined with mathematical functions and variables. If this is true, it should be possible for researchers to train systems to avoid those specific behaviors. The research team aimed to develop a toolkit that lets users of an AI system specify which behaviors they want it to avoid, and that enables AI engineers to reliably train a system that will avoid unwanted actions when used in real-world scenarios.
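The paper itself gives the exact formulation, but the core idea, splitting training into candidate selection and a high-confidence safety test, can be sketched simply. The Python fragment below is a minimal illustration rather than the authors' implementation; it assumes a user-supplied constraint function g whose value is at most zero when behavior is acceptable:

```python
import numpy as np
from scipy import stats

def safety_test(g_values, delta=0.05):
    """High-confidence check that an unwanted behavior is avoided.
    g_values holds per-sample values of a user-defined constraint
    function g, where g <= 0 means "acceptable" (a convention assumed
    for this sketch). Passes only if a one-sided Student-t upper bound
    on E[g] is below zero with confidence 1 - delta."""
    g = np.asarray(g_values, dtype=float)
    n = len(g)
    mean = g.mean()
    sem = g.std(ddof=1) / np.sqrt(n)
    upper_bound = mean + sem * stats.t.ppf(1 - delta, df=n - 1)
    return upper_bound <= 0.0

def train_with_safety(train_data, safety_data, fit, g, delta=0.05):
    """Candidate selection on one data split, safety test on another.
    Returns the trained model, or None ("no solution found") when the
    constraint cannot be certified."""
    model = fit(train_data)  # ordinary training step
    if safety_test([g(model, x) for x in safety_data], delta):
        return model
    return None
```

Crucially, when the safety test fails, the algorithm declines to return a model at all rather than returning one that might misbehave.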

Philip Thomas, the first author of the paper and an assistant professor of computer science at the University of Massachusetts Amherst, explained that the research team aims to demonstrate that designers of machine learning algorithms can make it easier for AI users to describe unwanted behaviors, with a high likelihood that the resulting system will avoid them.

The research team tested their technique by applying it to a common problem in data science: gender bias. The team aimed to make algorithms used to predict college students’ GPAs fairer by reducing gender bias. Using an experimental dataset, they instructed their AI system to avoid creating models that systematically underestimated or overestimated GPAs for one gender. Following these instructions, the algorithm created a model that predicted student GPAs better and had substantially less systematic gender bias than previously existing models. Previous GPA prediction models suffered from bias either because the bias-reduction methods applied were too limited to be useful or because no bias reduction was used at all.
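The paper's fairness definitions are more carefully specified, but as a hypothetical illustration, the "don't systematically over- or underestimate one gender's GPA" requirement could be expressed as a constraint on the gap in mean prediction error between groups (the group labels below are stand-ins):

```python
import numpy as np

def mean_error_gap(y_true, y_pred, gender):
    """Hypothetical fairness constraint: the absolute gap between the
    mean GPA prediction error for each gender. Values near zero mean
    neither group is systematically over- or underestimated."""
    err = np.asarray(y_pred) - np.asarray(y_true)
    gender = np.asarray(gender)
    return abs(err[gender == "F"].mean() - err[gender == "M"].mean())

# A constrained training run would then demand, with high confidence,
# that mean_error_gap(...) stay below a user-chosen tolerance.
```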

A different algorithm was also developed by the research team. This algorithm was implemented in an automated insulin pump and was intended to balance performance and safety. Automated insulin pumps need to decide how large an insulin dose a patient should be given. After eating, the pump will ideally deliver a dose of insulin just large enough to keep blood sugar levels in check. The insulin doses that are delivered must be neither too large nor too small.

Machine learning algorithms are already proficient at identifying patterns in an individual’s response to insulin doses, but existing analysis methods don’t let doctors specify outcomes that should be avoided, such as low blood sugar crashes. In contrast, the research team was able to develop a method that could be trained to deliver insulin doses that stay between the two extremes, preventing either underdosing or overdosing. While the system isn’t ready for testing in real patients just yet, a more sophisticated AI based on this approach could improve quality of life for people with diabetes.
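As a toy version of the same idea in the dosing setting (this is not the paper's controller, and the glucose range used is illustrative, not clinical guidance), a safety constraint can penalize any predicted blood sugar reading that leaves a target range:

```python
def glucose_constraint(trajectory, low=70.0, high=180.0):
    """Toy safety signal for a dosing policy, with illustrative bounds
    in mg/dL. Returns a value <= 0 only when every predicted reading
    stays inside [low, high], matching the g <= 0 convention above."""
    return max(
        max(low - g for g in trajectory),   # how far readings dip below range
        max(g - high for g in trajectory),  # how far readings rise above range
    )
```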

In the research paper, the researchers refer to the algorithm as a “Seldonian” algorithm, a nod to Hari Seldon, a character created by the sci-fi author Isaac Asimov, who also formulated the famous Three Laws of Robotics, the first of which holds that a robot “may not injure a human being or, through inaction, allow a human being to come to harm.” The research team hopes that their framework will allow AI researchers and engineers to create a variety of algorithms and systems that avoid dangerous behavior. Emma Brunskill, senior author of the paper and a Stanford assistant professor of computer science, explained to TechXplore:

“We want to advance AI that respects the values of its human users and justifies the trust we place in autonomous systems.”

Neural Network Makes it Easier to Identify Different Points in History

One area that receives less attention in terms of artificial intelligence (AI) potential is how it can be used in history, anthropology, archaeology, and other similar fields. That potential is being demonstrated by new research showing how machine learning can act as a tool for archaeologists to differentiate between two major periods: the Middle Stone Age (MSA) and the Later Stone Age (LSA).

This differentiation may seem like something academia and archaeologists have already established, but that is far from the case. In many instances, it is not easy to distinguish between the two.

MSA and LSA 

The first MSA toolkits appeared around 300 thousand years ago, at the same time as the earliest fossils of Homo sapiens, and they remained in use until about 30 thousand years ago. A major shift in behavior took place around 67 thousand years ago, when changes in stone tool production produced the toolkits known as LSA.

LSA toolkits were still being used in the recent past, and it is now becoming clear that the shift from MSA to LSA was anything but a linear process. The changes took place at different times in different places, which is why researchers are so focused on a process that can help explain cultural innovation and creativity.

The foundation of this understanding is the differentiation between MSA and LSA.

Dr. Jimbob Blinkhorn is an archaeologist from the Pan African Evolution Research Group, Max Planck Institute for the Science of Human History and the Centre for Quaternary Research, Department of Geography, Royal Holloway. 

“Eastern Africa is a key region to examine this major cultural change, not only because it hosts some of the youngest MSA sites and some of the oldest LSA sites, but also because a large number of well excavated and dated sites make it ideal for research using quantitative methods,” Dr. Blinkhorn says. “This enabled us to pull together a substantial database of changing patterns to stone tool production and use, spanning 130 to 12 thousand years ago, to examine the MSA-LSA transition.” 

Artificial Neural Networks (ANNs) 

The study is based on 16 alternate tool types across 92 stone tool assemblages, with a focus on their presence or absence. It emphasizes the constellations of tool forms that often occur together rather than each individual tool.

Dr. Matt Grove is an archaeologist at the University of Liverpool.

“We’ve employed an Artificial Neural Network (ANN) approach to train and test models that differentiate LSA assemblages from MSA assemblages, as well as examining chronological difference between older (130-71 thousand years ago) and younger (71-28 thousand years ago) MSA assemblages with a 94% success rate,” Dr. Grove says.

Artificial Neural Networks (ANNs) mimic certain information-processing features of the human brain; their processing power relies on many simple units acting together.
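The paper's own network configuration isn't detailed here, but the general setup is easy to sketch. Assuming each assemblage is encoded as a 16-element binary vector of tool-type presence/absence, a small classifier along the following lines could be trained and scored (the data here are random stand-ins for the study's 92 published assemblages):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Each row is one assemblage: 16 binary features, one per tool type
# (1 = present, 0 = absent). Random placeholders, not the real data.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(92, 16))
y = rng.integers(0, 2, size=92)  # 0 = MSA, 1 = LSA

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # held-out accuracy
```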

“ANNs have sometimes been described as a ‘black box’ approach, as even when they are highly successful, it may not always be clear exactly why,” Grove says. “We employed a simulation approach that breaks open this black box to understand which inputs have a significant impact on the results. This enabled us to identify how patterns of stone tool assemblage composition vary between the MSA and LSA, and we hope this demonstrates how such methods can be used more widely in archaeological research in the future.” 
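The paper's simulation method is its own; a generic stand-in for this kind of analysis is to toggle each presence/absence feature in turn and measure how much a fitted model's predictions shift:

```python
import numpy as np

def input_importance(model, X):
    """Crude sensitivity analysis: flip each binary feature and record
    the average change in predicted class probabilities. Higher scores
    mark tool types that matter more to the classification."""
    base = model.predict_proba(X)
    scores = []
    for j in range(X.shape[1]):
        X_flip = X.copy()
        X_flip[:, j] = 1 - X_flip[:, j]  # toggle presence/absence
        scores.append(np.abs(model.predict_proba(X_flip) - base).mean())
    return np.array(scores)
```

Applied to a fitted classifier like the sketch above, the highest-scoring features would correspond to the tool types that most strongly separate MSA from LSA assemblages.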

“The results of our study show that MSA and LSA assemblages can be differentiated based on the constellation of artifact types found within an assemblage alone,” Blinkhorn says. “The combined occurrence of backed pieces, blade and bipolar technologies together with the combined absence of core tools, Levallois flake technology, point technology and scrapers robustly identifies LSA assemblages, with the opposite pattern identifying MSA assemblages. Significantly, this provides quantified support to qualitative differences noted by earlier researchers that key typological changes do occur with this cultural transition.”

The team will now use the newly developed method to look further into cultural change in the African Stone Age. 

“The approach we’ve employed offers a powerful toolkit to examine the categories we use to describe the archaeological record and to help us examine and explain cultural change amongst our ancestors,” Blinkhorn says.

Researchers Develop “DeepTrust” Tool to Help Increase AI Trustworthiness

The safety and trustworthiness of artificial intelligence (AI) are among the most important aspects of the technology. They are constantly being improved and worked on by top experts across different fields, and they will be crucial to the full implementation of AI throughout society.

Some of that new work is coming out of the University of Southern California, where USC Viterbi Engineering researchers have developed a new tool capable of generating automatic indicators for whether or not AI algorithms are trustworthy in their data and predictions. 

The research was published in Frontiers in Artificial Intelligence, titled “There is Hope After All: Quantifying Opinion and Trustworthiness in Neural Networks”. The authors of the paper include Mingxi Cheng, Shahin Nazarian, and Paul Bogdan of the USC Cyber Physical Systems Group. 

Trustworthiness of Neural Networks

One of the biggest tasks in this area is getting neural networks to generate predictions that can be trusted. In many cases, this is what stops the full adoption of technology that relies on AI. 

For example, self-driving vehicles are required to act independently and make accurate decisions on autopilot. They need to be capable of making these decisions extremely quickly while deciphering and recognizing objects on the road. This is crucial, especially in scenarios where the technology would have to distinguish between a speed bump, some other object, or a living being.

Other scenarios include a self-driving vehicle deciding what to do when another vehicle faces it head-on, and, most complex of all, deciding whether to hit what it perceives as another vehicle, some object, or a living being.

All of this means we are putting an extreme amount of trust in the capability of the self-driving vehicle’s software to make the correct decision in just fractions of a second. It becomes even more difficult when there is conflicting information from different sensors, such as computer vision from cameras and Lidar.

Lead author Mingxi Cheng decided to take up this project after thinking: “Even humans can be indecisive in certain decision-making scenarios. In cases involving conflicting information, why can’t machines tell us when they don’t know?”

DeepTrust

The tool that was created by the researchers is called DeepTrust, and it is able to quantify the amount of uncertainty, according to Paul Bogdan, an associate professor in the Ming Hsieh Department of Electrical and Computer Engineering. 

The team spent nearly two years developing DeepTrust, primarily using subjective logic to assess neural networks. In one example of the tool at work, it examined 2016 presidential election polls and determined that the prediction of a Hillary Clinton victory carried a greater margin of error.
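In subjective logic, an opinion about a proposition is expressed as belief, disbelief, and uncertainty masses that sum to one, built from accumulated positive and negative evidence. DeepTrust's application of this to neural networks is considerably more involved, but the basic construction looks like this:

```python
def opinion(r, s, W=2.0, base_rate=0.5):
    """Binomial opinion from subjective logic: r pieces of supporting
    evidence, s pieces of conflicting evidence, W a prior weight.
    Returns (belief, disbelief, uncertainty), which sum to 1, plus the
    expected probability of the proposition."""
    total = r + s + W
    b, d, u = r / total, s / total, W / total
    return b, d, u, b + base_rate * u

print(opinion(8, 2))  # plentiful evidence -> low uncertainty
print(opinion(1, 1))  # scarce evidence   -> high uncertainty
```

The uncertainty mass is what lets a system report "I don't know" instead of forcing a confident answer from conflicting inputs.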

The DeepTrust tool also makes it easier to test the reliability of AI algorithms, which are normally trained on up to millions of data points. The alternative, independently checking each of those data points for accuracy, is an extremely time-consuming task.

According to the researchers, the accuracy of these neural network systems depends on their architecture, and accuracy and trust can be maximized simultaneously.

“To our knowledge, there is no trust quantification model or tool for deep learning, artificial intelligence and machine learning. This is the first approach and opens new research directions,” Bogdan says. 

Bogdan also believes that DeepTrust could help push AI forward to the point where it is “aware and adaptive.”

AI Researchers Design Program To Generate Sound Effects For Movies and Other Media

Researchers from the University of Texas at San Antonio have created an AI-based application capable of observing the actions taking place in a video and creating artificial sound effects to match those actions. The sound effects generated by the program are reportedly so realistic that when human observers were polled, they typically thought they were legitimate.

The program responsible for generating the sound effects, AutoFoley, was detailed in a study recently published in IEEE Transactions on Multimedia. According to IEEE Spectrum, the AI program was developed by Jeff Prevost, a professor at UT San Antonio, and Ph.D. student Sanchita Ghose. The researchers created the program by joining multiple machine learning models together.

The first task in generating sound effects appropriate to the actions on screen is recognizing those actions and mapping them to sound effects. To accomplish this, the researchers designed two different machine learning models and tested their different approaches. The first model operates by extracting frames from the videos it is fed and analyzing those frames for relevant features like motions and colors. The second model analyzes how the position of an object changes across frames to extract temporal information, which is used to anticipate the next likely actions in the video. The two models analyze the actions in a clip in different ways, but both use the information contained in the clip to guess what sound would best accompany it.
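The study's actual architectures are more elaborate, but the pipeline described above can be sketched schematically. The PyTorch module below (layer sizes and the number of sound classes are illustrative assumptions) extracts per-frame visual features with a small convolutional stack, passes them through a recurrent layer that tracks change across frames, and emits logits over candidate sound classes:

```python
import torch
import torch.nn as nn

class ClipToSoundClass(nn.Module):
    """Schematic frame-features + temporal model, not the paper's design."""
    def __init__(self, n_sound_classes=16):
        super().__init__()
        self.features = nn.Sequential(          # per-frame features
            nn.Conv2d(3, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.temporal = nn.LSTM(64, 128, batch_first=True)  # motion over time
        self.classify = nn.Linear(128, n_sound_classes)

    def forward(self, clip):                    # clip: (batch, frames, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.features(clip.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.temporal(feats)
        return self.classify(h[-1])             # logits over sound classes
```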

The next task is to synthesize the sound, which is accomplished by matching activities and predicted motions to possible sound samples. According to Ghose and Prevost, AutoFoley was used to generate sound for 1,000 short clips featuring actions and items like a fire, a running horse, ticking clocks, and rain falling on plants. AutoFoley was most successful with clips that didn’t require a perfect match between action and sound, and it had trouble with clips where actions happened with more variation; even so, the program was able to fool many human observers into picking its generated sounds over the sound that originally accompanied a clip.
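The matching step itself can be pictured as a simple retrieval: the predicted class indexes into a bank of recorded samples. The class names and file names below are hypothetical:

```python
SOUND_BANK = {"fire": "crackle.wav", "horse": "gallop.wav",
              "clock": "tick.wav", "rain": "rain_on_leaves.wav"}

def pick_sound(logits, class_names):
    """Return the stored sample for the most likely predicted class."""
    return SOUND_BANK[class_names[logits.argmax().item()]]
```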

Prevost and Ghose recruited 57 college students and had them watch different clips. Some clips contained the original audio, while others contained audio generated by AutoFoley. When the first model was tested, approximately 73% of the students selected the synthesized audio as the original, passing over the true sound that accompanied the clip. The other model performed slightly worse, with only 66% of participants selecting the generated audio over the original.

Prevost explained that AutoFoley could potentially be used to expedite the process of producing movies, television, and other pieces of media. Prevost notes that a realistic Foley track is important to making media engaging and believable, but that the Foley process often takes a significant amount of time to complete. Having an automated system that could handle the creation of basic Foley elements could make producing media cheaper and quicker.

Currently, AutoFoley has some notable limitations. For one, while the model performs well when observing events with stable, predictable motions, it struggles to generate audio for events that vary over time (like thunderstorms). Beyond this, it requires that the classification subject be present in the entire clip and never leave the frame. The research team is aiming to address these issues in future versions of the application.
