
Regulation

Scalable Autonomous Vehicle Safety Tools Developed By Researchers


As autonomous vehicles are manufactured and deployed at an increasing pace, their safety becomes ever more important. For that reason, researchers are investing in the creation of metrics and tools to track the safety of autonomous vehicles. As reported by ScienceDaily, a research team from the University of Illinois at Urbana-Champaign has used machine learning algorithms to create a scalable autonomous vehicle safety analysis platform, utilizing both hardware and software improvements to do so.

Improving the safety of autonomous vehicles remains one of the most difficult problems in AI because of the sheer number of variables involved. Not only are a vehicle's sensors and algorithms extremely complex, but many external conditions are constantly in flux, such as road conditions, topography, weather, lighting, and traffic.

The landscape and algorithms of autonomous vehicles are both constantly changing, and companies need a way to keep up with the changes and respond to new issues. The Illinois researchers are working on a platform that lets companies address recently identified safety concerns quickly and cost-effectively. However, the sheer complexity of the systems that drive autonomous vehicles makes this a massive undertaking. The research team is designing a system that will be able to keep track of and update autonomous vehicle systems that contain dozens of processors and accelerators running millions of lines of code.

In general, autonomous vehicles drive quite safely. However, when a failure or unexpected event occurs, an autonomous vehicle is currently more likely to get into an accident than a human driver, as the vehicle often has trouble negotiating sudden emergencies. While it is admittedly difficult to quantify how safe autonomous vehicles are and what is to blame for accidents, it is obvious that a failure in a vehicle traveling down a road at 70 mph could prove extremely dangerous, hence the need to improve how autonomous vehicles handle emergencies.

Saurabh Jha, a doctoral candidate and one of the researchers involved with the project, explained to ScienceDaily the need to improve failure handling in autonomous vehicles:

“If a driver of a typical car senses a problem such as vehicle drift or pull, the driver can adjust his/her behavior and guide the car to a safe stopping point. However, the behavior of the autonomous vehicle may be unpredictable in such a scenario unless the autonomous vehicle is explicitly trained for such problems. In the real world, there are an infinite number of such cases.”

The researchers are aiming to solve this problem by gathering and analyzing safety reports submitted by autonomous vehicle companies. Companies like Waymo and Uber are required to submit reports to the DMV in California at least annually. These reports contain statistics such as how far the cars have driven, how many accidents occurred, and what conditions the vehicles were operating under.

The University of Illinois research team analyzed reports covering the years 2014 to 2017. During this period, autonomous vehicles drove around 1,116,000 miles distributed across 144 different vehicles. According to the findings of the research team, accidents were 4,000 times more likely to occur over that distance than over the same distance driven by human drivers. Many of these accidents may reflect cases where the vehicle's AI failed to disengage in time to avoid the accident, relying instead on the human driver to take over.

It’s difficult to diagnose potential errors in the hardware or software of an autonomous vehicle because many errors manifest only under the right conditions, and it isn’t feasible to conduct tests under every possible condition that could occur on the road. So instead of collecting data on hundreds of thousands of real miles logged by autonomous vehicles, the research team is utilizing simulated environments, drastically reducing the money and time spent generating training data for AVs.

The research team uses the generated data to explore situations where AV failures and safety issues can occur. Utilizing these simulations can genuinely help companies find safety risks they wouldn’t otherwise catch. For instance, when the team tested the Apollo AV, created by Baidu, they isolated over 500 instances where the AV failed to handle an emergency situation and an accident occurred as a result. The research team hopes that other companies will make use of their testing platform and improve the safety of their autonomous vehicles.
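The paper's actual simulation platform isn't reproduced here, but the core idea behind this kind of testing, injecting faults into a simulated drive and checking whether the vehicle still copes, can be sketched in a few lines. Everything below is a hypothetical simplification: a toy one-dimensional scenario and a made-up "stuck sensor" fault model, not the Illinois team's software.

```python
import random

def simulate_drive(fault_time=None, dt=0.1, duration=30.0):
    """Toy 1-D scenario: an ego car follows a lead car that brakes hard.

    Hypothetical fault model: from fault_time onward, the distance
    sensor freezes at its last reading (a "stuck sensor" fault).
    Returns True if the run ends in a collision.
    """
    ego_pos, ego_speed = 0.0, 30.0       # metres, m/s (~70 mph)
    lead_pos, lead_speed = 60.0, 30.0
    sensed_gap = lead_pos - ego_pos
    t = 0.0
    while t < duration:
        if t > 5.0:                      # lead car brakes hard at t = 5 s
            lead_speed = max(0.0, lead_speed - 6.0 * dt)
        if fault_time is None or t < fault_time:
            sensed_gap = lead_pos - ego_pos   # healthy sensor reading
        if sensed_gap < 50.0:            # naive controller: brake on short gap
            ego_speed = max(0.0, ego_speed - 8.0 * dt)
        ego_pos += ego_speed * dt
        lead_pos += lead_speed * dt
        if lead_pos - ego_pos <= 0.0:
            return True                  # collision
        t += dt
    return False

assert not simulate_drive()              # the fault-free drive is safe

# Fault-injection campaign: sweep the stuck-sensor fault over many
# random activation times and count how many runs end in a crash.
crashes = sum(simulate_drive(fault_time=random.uniform(0.0, 10.0))
              for _ in range(1000))
print(f"{crashes}/1000 injected faults led to a collision")
```

Even in this toy version, the campaign surfaces the pattern a real platform would look for: faults that activate before the controller starts braking reliably produce crashes, while later faults are harmless, pinpointing the window in which the system is vulnerable.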


Artificial Neural Networks

AI System Automatically Transforms To Evade Censorship Attempts


Scientists at the University of Maryland (UMD) have created an AI-powered program that can transform itself to evade internet censorship attempts. As reported by TechXplore, authoritarian governments that censor the internet and the engineers who try to counter that censorship are locked in an arms race, with each side trying to outdo the other. Learning to circumvent censorship techniques typically takes more time than developing them, but the new system developed by the University of Maryland team could make adapting to censorship attempts easier and quicker.

The tool invented by the research team is dubbed Geneva, which stands for Genetic Evasion. The tool dodges censorship attempts by exploiting bugs and logic failures in censors that would be hard for humans to find.

Information on the internet is transported in the form of packets: data is disassembled into small chunks at the sender's computer and reassembled when it arrives at the receiver's computer. A common method of censoring the internet is to monitor the packet data created when a search is made. By inspecting these packets, a censor can block results for certain banned keywords or domain names.

Geneva works by modifying how the packet data is actually broken up and transferred. This means that the censorship algorithms don’t classify the searches or results as banned content, or are otherwise unable to block the connection.
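Geneva's real strategy language manipulates packets at the TCP/IP level, but the basic intuition, that a keyword split across packets may never appear whole to a packet-inspecting censor, can be illustrated with nothing more than standard sockets. This is an illustrative simplification, not Geneva's code, and the host and request in the usage comment are placeholders:

```python
import socket

def send_segmented(host, port, payload, chunk=4):
    """Send a request a few bytes at a time, so no single TCP segment
    contains the full payload. A naive censor that scans individual
    packets for a banned keyword may then fail to match it.
    (Illustrative only: real middleboxes often reassemble streams,
    which is exactly the kind of logic Geneva probes for bugs in.)
    """
    with socket.create_connection((host, port)) as s:
        # TCP_NODELAY discourages the kernel from coalescing the small
        # writes back into one segment (not guaranteed on all stacks).
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for i in range(0, len(payload), chunk):
            s.sendall(payload[i:i + chunk])
        return s.recv(4096)

# Hypothetical usage: the banned keyword never travels in one piece.
# send_segmented("example.com", 80,
#                b"GET /?q=banned-term HTTP/1.1\r\nHost: example.com\r\n\r\n")
```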

Geneva utilizes a genetic algorithm, a type of algorithm inspired by biological evolution. In place of DNA strands, Geneva uses small chunks of code as building blocks. These building blocks can be rearranged into combinations that evade attempts to break up or stall data packets. Geneva's bits of code are rearranged over multiple generations: the instructions that best evaded censorship in the previous generation are combined to create a new set of strategies. This evolutionary process enables sophisticated evasion techniques to be created fairly quickly, and Geneva can run in the background of the browser while a user browses the web.
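A minimal sketch of that evolutionary loop follows, with the packet-manipulation primitives reduced to opaque labels and the fitness test stubbed out. All names here are hypothetical stand-ins, not Geneva's actual strategy language; in the real system, fitness would be measured by running each strategy against a live censor.

```python
import random

# Hypothetical building blocks; Geneva's real primitives are packet-level
# actions such as duplicating, fragmenting, or tampering with segments.
PRIMITIVES = ["dup", "frag", "drop", "tamper-seq", "tamper-chksum"]

def fitness(strategy):
    """Stub: score how well a strategy evades the censor.
    A real implementation would attempt a censored connection
    using the strategy and measure whether it survived."""
    return random.random()  # placeholder for a live evasion test

def mutate(strategy):
    s = list(strategy)
    s[random.randrange(len(s))] = random.choice(PRIMITIVES)
    return s

def crossover(a, b):
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, length=4, generations=20):
    population = [[random.choice(PRIMITIVES) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 4]      # keep the best evaders
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())   # e.g. ['frag', 'tamper-seq', 'dup', 'frag']
```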

Dave Levin, an assistant professor of computer science at UMD, explained that Geneva puts anti-censors at a distinct advantage for the first time. Levin also explained that the researchers' method flips traditional censorship evasion on its head. Traditional methods of defeating censorship involve understanding how a censorship strategy works and then reverse-engineering a method to beat it. In the case of Geneva, however, the program figures out how to evade the censor first, and the researchers then analyze which censorship strategies are being used.

To test their tool's performance, the research team ran Geneva on a computer located in China equipped with an unmodified Google Chrome browser. Using the strategies that Geneva identified, they were able to browse for keyword results without censorship. The tool also proved useful in India and Kazakhstan, which block certain URLs.

The research team aims to release the code and data used to create the model soon, hoping that it will give people in authoritarian countries better, more open access to information. The team is also experimenting with deploying the tool on the device that serves the blocked content rather than on the client's computer (the computer that makes the search). If successful, this would mean that people could access blocked content without installing the tool on their own machines.

“If Geneva can be deployed on the server-side and work as well as it does on the client-side, then it could potentially open up communications for millions of people,” Levin said. “That’s an amazing possibility, and it’s a direction we’re pursuing.”


Artificial Neural Networks

AI Teaches Itself Laws of Physics


In what is a monumental moment in both AI and physics, a neural network has “rediscovered” that Earth orbits the Sun. The new development could be critical in solving quantum-mechanics problems, and the researchers hope that it can be used to discover new laws of physics by identifying patterns within large data sets. 

The neural network, named SciNet, was fed measurements showing how the Sun and Mars appear from Earth. Scientists at the Swiss Federal Institute of Technology then tasked SciNet with predicting where the Sun and Mars would be at different times in the future. 

The research will be published in Physical Review Letters. 

Designing the Algorithm

The team, including physicist Renato Renner, set out to make an algorithm capable of distilling large data sets into basic formulae, much as physicists do when they derive equations. To do this, the researchers designed a neural network, a system loosely modeled on the human brain.

The formulas that SciNet generated placed the Sun at the center of our solar system. One of the remarkable aspects of this research is that SciNet arrived at this conclusion in much the same way that astronomer Nicolaus Copernicus arrived at heliocentrism.

The team highlighted this in a paper published on the preprint repository arXiv. 

“In the 16th century, Copernicus measured the angles between a distant fixed star and several planets and celestial bodies and hypothesized that the Sun, and not the Earth, is in the centre of our solar system and that the planets move around the Sun on simple orbits,” the team wrote. “This explains the complicated orbits as seen from Earth.”

The team wanted SciNet to predict the movements of the Sun and Mars in the simplest way possible, so SciNet uses two sub-networks that send information back and forth. One network analyzes the data and learns from it, while the other makes predictions and tests their accuracy based on that knowledge. Because the two networks are connected by only a few links, the information passing between them must be compressed into a much simpler representation.
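The published SciNet code isn't reproduced here, but the two-sub-network idea, an encoder forced to squeeze its observations through a tiny latent layer before a decoder answers a question about the future, can be sketched in PyTorch. The layer sizes, the two-dimensional bottleneck, and the input shapes below are illustrative choices, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class SciNetSketch(nn.Module):
    """Encoder-decoder with a deliberately narrow bottleneck.

    The encoder compresses a series of observations (e.g. angles of the
    Sun and Mars as seen from Earth) into a few latent variables; the
    decoder combines that latent code with a "question" (a future time)
    to predict future observations.
    """
    def __init__(self, n_obs=100, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_obs, 64), nn.ReLU(),
            nn.Linear(64, n_latent),          # the few-links bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent + 1, 64), nn.ReLU(),
            nn.Linear(64, 2),                 # predicted pair of angles
        )

    def forward(self, observations, question_time):
        latent = self.encoder(observations)
        return self.decoder(torch.cat([latent, question_time], dim=-1))

model = SciNetSketch()
obs = torch.randn(8, 100)      # batch of 8 observation histories
t = torch.rand(8, 1)           # future times to ask about
print(model(obs, t).shape)     # -> torch.Size([8, 2])
```

The narrow latent layer is the point of the design: if two variables suffice to predict the planets' future positions, the network has effectively been forced to discover a compact representation, which the researchers can then inspect and relate to physical quantities.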

Conventional neural networks learn to identify and recognize objects from huge data sets, generating features that are encoded in mathematical 'nodes', the artificial equivalent of neurons. But unlike a physicist's formulas, the representations these networks learn tend to be unpredictable and difficult to interpret.

Artificial Intelligence and Scientific Discoveries 

One of the tests involved giving the network simulated data about the movements of Mars and the Sun as seen from Earth. From that vantage point, the orbit of Mars appears erratic and periodically reverses its course. In the 1500s, Nicolaus Copernicus realized that simpler formulas could predict the planets' movements if they were taken to orbit the Sun.

When the neural network “discovered” similar formulas for Mars's trajectory, it rediscovered one of the most important pieces of knowledge in history.

Mario Krenn is a physicist at the University of Toronto in Canada, and he works on using artificial intelligence to make scientific discoveries. 

SciNet rediscovered “one of the most important shifts of paradigms in the history of science,” he said. 

According to Renner, humans are still needed to interpret the equations and determine how they are connected to the movement of the planets around the Sun. 

Hod Lipson is a roboticist at Columbia University in New York City. 

“This work is important because it is able to single out the crucial parameters that describe a physical system,” he says. “I think that these kinds of techniques are our only hope of understanding and keeping pace with increasingly complex phenomena, in physics and beyond.”

 


Artificial Neural Networks

AI Used To Improve Prediction Of Lightning Strikes


Weather prediction has gotten substantially better over the past decade, with five-day forecasts now about 90% accurate. However, one aspect of weather that has long eluded prediction is lightning. Because lightning is so unpredictable, it's very difficult to minimize the damage it can do to human lives, property, and nature. Thanks to the work of a research team from the EPFL (Ecole Polytechnique Fédérale de Lausanne) School of Engineering, lightning strikes may become much more predictable in the near future.

As reported by SciTechDaily, a team of researchers from EPFL's School of Engineering, Electromagnetic Compatibility Laboratory, recently created an AI program capable of accurately predicting a lightning strike 10 to 30 minutes in advance within a 30-kilometer radius. The system applies artificial intelligence algorithms to meteorological data, and it will go on to be utilized in the European Laser Lightning Rod project.

The goal of the European Laser Lightning Rod (ELLR) project is to create new types of lightning protection systems and techniques. Specifically, ELLR aims to create a laser-based system that reduces the number of downward natural lightning strikes by stimulating upward lightning flashes.

According to the research team, current methods of lightning prediction rely on data gathered by radar or satellite, which tends to be very expensive. Radar is used to scan storms and determine their electrical potential. Other lightning prediction systems often require three or more receivers in a region so that occurrences of lightning can be triangulated. Creating predictions in such a fashion is often a slow and complex process.

Instead, the method developed by the EPFL team utilizes data that can be collected at any standard weather station. This data is much cheaper and easier to collect, so the system could potentially be applied to remote regions not covered by satellite or radar and where communication networks are spotty.

The data for the predictions can also be gathered quickly and in real-time, which means that a region could potentially be advised of incoming lightning strikes even before a storm has formed in the region. As reported by ScienceDaily, the method that the EPFL team used to make predictions is a machine learning algorithm trained on data collected from 12 Swiss weather stations. The data spanned a decade and both mountainous regions and urban regions were represented in the dataset.

The reason that lightning strikes can be predicted at all is that they are heavily correlated with specific weather conditions. One of the most important ingredients for the formation of lightning is intense convection, where moist air rises as the atmosphere becomes unstable in the local region. Collisions between water droplets, ice particles and other molecules within the clouds can cause electrical charges within the particles to separate. This separation leads to the creation of cloud layers with opposing charges, which leads to the discharges that appear as lightning. The atmospheric features associated with these weather conditions can be fed into machine learning algorithms in order to predict lightning strikes.

Among the features in the dataset were variables like wind speed, relative humidity, air temperature, and atmospheric pressure. These features were labeled with recorded lightning strikes and the location of the system that detected each strike. From these features, the algorithm learned the patterns in the conditions that led to lightning strikes. When the model was tested, it correctly forecast a lightning strike around 80% of the time.
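The article doesn't specify which model the EPFL team trained, so the sketch below stands in a generic scikit-learn classifier over the four named features. The synthetic data, the label rule, and the random-forest choice are all assumptions for illustration, not the team's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for station data: wind speed (m/s), relative
# humidity (%), air temperature (deg C), atmospheric pressure (hPa).
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(0, 25, 5000),      # wind speed
    rng.uniform(10, 100, 5000),    # relative humidity
    rng.uniform(-10, 35, 5000),    # air temperature
    rng.uniform(950, 1050, 5000),  # pressure
])
# Toy labels: lightning made more likely in humid, low-pressure air,
# loosely mimicking the convective conditions described above.
p = 1 / (1 + np.exp(-(0.05 * (X[:, 1] - 70) + 0.05 * (1000 - X[:, 3]))))
y = rng.random(5000) < p

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

The appeal of this formulation is that the four inputs are exactly what any standard weather station already logs, which is what lets the approach reach regions radar and satellite systems miss.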

The EPFL team's model is notable because it is the first system based on commonly available meteorological data shown to accurately predict lightning strikes.
