Computer Able to Identify 200 Species of Birds from One Photo

Researchers from Duke University used machine learning to train a computer to identify up to 200 different species of birds from a single photo. For a human, the same skill often takes years of birdwatching to develop. 

The research was led by Duke computer science Ph.D. student Chaofan Chen, along with undergraduate Oscar Li and other members of the Prediction Analysis Lab, which is directed by Duke professor Cynthia Rudin. 

A.I. Showing Its Thinking

While the identification process is impressive, the more important development is that the A.I. can show its thinking, allowing even an inexperienced bird watcher to follow its reasoning. 

The deep neural network, a class of algorithms loosely modeled on the workings of the brain, was trained on 11,788 photos covering 200 species of birds, from ducks to hummingbirds. 

The team of researchers did not have to explicitly train the network to recognize beaks or wing feathers. Instead, given a photo of a bird, the network picks out patterns in the image and matches them against patterns it has already encountered in the typical traits of each species. 

According to the team, the network then generates a series of heat maps that highlight the traits behind its decision. For example, when distinguishing an ordinary warbler from a hooded warbler, it highlights features like the masked head and yellow belly, and shows that these features are what led to the identification. 
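The team’s own code is not reproduced here, but the core idea can be sketched: compare convolutional features at every image location against learned “prototype” trait vectors, and render the resulting similarity scores as a heat map. The shapes, names, and similarity function below are illustrative assumptions, not the published implementation.

```python
import numpy as np

# Minimal sketch of prototype-based reasoning, assuming a conv net has
# already produced a 7x7 grid of 64-dim feature vectors for one image.
# All names, shapes, and values here are illustrative, not the authors' code.

rng = np.random.default_rng(0)
feature_map = rng.normal(size=(7, 7, 64))      # conv features for one image
prototypes = rng.normal(size=(10, 64))         # 10 learned trait prototypes

def similarity_heatmaps(feature_map, prototypes):
    """For each prototype, score every spatial location by similarity.

    High scores mark image regions that look like a learned trait
    (e.g. a masked head or a yellow belly), which is what gets
    rendered as a heat map."""
    h, w, d = feature_map.shape
    flat = feature_map.reshape(-1, d)
    # Smaller squared distance -> "looks more like" the prototype.
    d2 = ((flat[None, :, :] - prototypes[:, None, :]) ** 2).sum(axis=-1)
    sims = np.log((d2 + 1) / (d2 + 1e-4))      # bounded similarity score
    return sims.reshape(len(prototypes), h, w)

heatmaps = similarity_heatmaps(feature_map, prototypes)
# The classifier then uses each prototype's peak similarity as evidence.
evidence = heatmaps.reshape(len(prototypes), -1).max(axis=1)
print(evidence.round(2))
```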

Unlike Other Systems

The neural network identified the correct species up to 84% of the time, putting it on par with some of the best-performing systems. The difference is that those systems don’t explain how they arrive at their answers. 

According to Rudin, this project’s most revolutionary aspect is that it provides visualization for what deep neural networks see when they look at an image. 

This kind of technology is already used on social media sites, to identify suspected criminals in surveillance footage, and to help autonomous vehicles recognize traffic lights and pedestrians. 

Unlike traditional software, deep learning software typically does not need to be explicitly programmed; it learns from data. However, that learning process is rarely visible, so it is often difficult to explain how the algorithms “think” when classifying an image. 

In the Future

Rudin and others are pushing the field forward with new deep learning models that can explain their reasoning and identification process. This lets researchers follow the process from start to finish and makes it easier to pinpoint the cause of a mistake or problem. 

Rudin and her team next plan to apply the algorithm in the medical field, where it could flag problem areas in medical images such as mammograms. This would help medical professionals detect lumps, calcifications, and other signs of breast cancer. 

According to Rudin, the network mimics the way doctors make a diagnosis. 

“It’s case-based reasoning,” Rudin said. “We’re hoping we can better explain to physicians or patients why their image was classified by the network as either malignant or benign.”
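As a rough illustration of case-based reasoning in general (not the team’s method), a classifier can label a new case by retrieving its most similar known cases and presenting those cases as the explanation. Everything below is synthetic and hypothetical.

```python
import numpy as np

# Toy illustration of case-based reasoning: classify a new case by its
# most similar labeled cases and surface those cases as the explanation.
# The features and labels are invented for the example.

rng = np.random.default_rng(1)
case_features = rng.normal(size=(100, 16))     # known, labeled cases
case_labels = rng.integers(0, 2, size=100)     # 0 = benign, 1 = malignant
new_case = rng.normal(size=16)

dists = np.linalg.norm(case_features - new_case, axis=1)
nearest = np.argsort(dists)[:5]                # the 5 most similar cases
prediction = int(case_labels[nearest].mean() >= 0.5)

print("prediction:", ["benign", "malignant"][prediction])
print("because it resembles cases:", nearest.tolist())
```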

The team will present a paper on their research at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019) in Vancouver on December 12. 

The study’s authors also include Daniel Tao and Alina Barnett of Duke and Jonathan Su of MIT Lincoln Laboratory. 

 

AI System Automatically Transforms To Evade Censorship Attempts

Research conducted by scientists at the University of Maryland (UMD) has produced an AI-powered program that can transform itself to evade internet censorship attempts. As reported by TechXplore, authoritarian governments that censor the internet and the engineers who try to counter that censorship are locked in an arms race, with each side trying to outdo the other. Learning to circumvent a censorship technique typically takes longer than developing one, but the new system developed by the University of Maryland team could make adapting to censorship attempts easier and quicker.

The tool invented by the research team is dubbed Geneva, which stands for Genetic Evasion. It dodges censorship attempts by exploiting bugs and logical flaws in censors that are hard for humans to find.

Information on the internet is transported in the form of packets: data is disassembled into small chunks at the sender’s computer, sent across the network, and reassembled at the receiver’s computer. A common method of censoring the internet is to monitor the packet data created when a search is made. The censor can then block results for certain banned keywords or domain names.

Geneva works by modifying how the packet data is actually broken up and transferred. As a result, the censorship algorithms fail to classify the searches or results as banned content, or are otherwise unable to block the connection.
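A toy sketch makes the mechanism concrete. Assuming a naive censor that scans each packet for a banned keyword, re-splitting the same byte stream at a different boundary defeats the match while leaving the reassembled request untouched. The keyword, censor logic, and split points below are invented for illustration.

```python
# Toy model of keyword-based packet censorship and one way Geneva-style
# packet manipulation can defeat it. Real censors inspect TCP segments;
# this sketch only mimics the logic with byte strings.

BANNED = b"forbidden-topic"

def naive_censor(packets):
    """Block the flow if any single packet contains a banned keyword."""
    return any(BANNED in p for p in packets)

request = b"GET /search?q=forbidden-topic HTTP/1.1"

# Normal packetization: the keyword sits inside one packet -> blocked.
blocked = [request[i:i + 32] for i in range(0, len(request), 32)]
print(naive_censor(blocked))   # True

# Evasive packetization: split mid-keyword so no packet matches,
# yet the receiver reassembles the identical byte stream.
evasive = [request[:18], request[18:]]
assert b"".join(evasive) == request
print(naive_censor(evasive))   # False
```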

Geneva utilizes a genetic algorithm, a type of algorithm inspired by biological evolution. In place of DNA strands, Geneva uses small chunks of code as its building blocks. These bits of code can be rearranged into combinations that evade attempts to break up or stall data packets. Over multiple generations, Geneva recombines the instructions that best evaded censorship in the previous generation into a new set of strategies. This evolutionary process lets sophisticated evasion techniques emerge fairly quickly, and Geneva can run in the background of the browser as a user browses the web.
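The following is a minimal sketch of that evolutionary loop, not Geneva’s actual engine: the action vocabulary and fitness function are stand-ins, but the selection, crossover, and mutation steps mirror the generational strategy described above.

```python
import random

# Hedged sketch of a genetic algorithm over packet-level "strategies".
# The ACTIONS list and fitness() are invented placeholders; a real system
# would score a strategy by actually testing it against a live censor.

ACTIONS = ["split", "duplicate", "reorder", "corrupt-checksum", "noop"]

def fitness(strategy):
    # Stand-in for "did this strategy evade the censor?": reward strategies
    # that split packets early and avoid useless no-ops.
    score = 0
    if "split" in strategy[:2]:
        score += 2
    score -= strategy.count("noop")
    return score

def evolve(pop_size=20, length=4, generations=30):
    pop = [[random.choice(ACTIONS) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)     # crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # mutation
                child[random.randrange(length)] = random.choice(ACTIONS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())
```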

Dave Levin, an assistant professor of computer science at UMD, explained that Geneva puts anti-censors at a distinct advantage for the first time. Levin also noted that the researchers’ approach flips traditional censorship evasion on its head. Traditional methods involve understanding how a censorship strategy works and then reverse-engineering ways to beat it. In the case of Geneva, however, the program figures out how to evade the censor, and the researchers then analyze what censorship strategies are being used.

To test the tool’s performance, the research team ran Geneva on a computer located in China equipped with an unmodified Google Chrome browser. Using the strategies Geneva identified, they were able to browse keyword results without censorship. The tool also proved useful in India and Kazakhstan, which block certain URLs.

The research team aims to release the code and data used to create the model soon, hoping it will give people in authoritarian countries more open access to information. The team is also experimenting with deploying the tool on the device that serves the blocked content rather than on the client’s computer (the computer that makes the search). If successful, this would mean people could access blocked content without installing the tool on their own computers.

“If Geneva can be deployed on the server-side and work as well as it does on the client-side, then it could potentially open up communications for millions of people,” Levin said. “That’s an amazing possibility, and it’s a direction we’re pursuing.”

AI Teaches Itself Laws of Physics

In what is a monumental moment in both AI and physics, a neural network has “rediscovered” that Earth orbits the Sun. The new development could be critical in solving quantum-mechanics problems, and the researchers hope that it can be used to discover new laws of physics by identifying patterns within large data sets. 

The neural network, named SciNet, was fed measurements showing how the Sun and Mars appear from Earth. Scientists at the Swiss Federal Institute of Technology then tasked SciNet with predicting where the Sun and Mars would be at different times in the future. 

The research will be published in Physical Review Letters. 

Designing the Algorithm

The team, including physicist Renato Renner, set out to make an algorithm capable of distilling large data sets into basic formulae, mirroring the way physicists distill observations into equations. To do this, the researchers based the neural network’s design on the human brain. 

The formulas generated by SciNet placed the Sun at the center of our solar system. Remarkably, SciNet arrived at this conclusion in much the same way that astronomer Nicolaus Copernicus arrived at heliocentrism. 

The team highlighted this in a paper published on the preprint repository arXiv. 

“In the 16th century, Copernicus measured the angles between a distant fixed star and several planets and celestial bodies and hypothesized that the Sun, and not the Earth, is in the centre of our solar system and that the planets move around the Sun on simple orbits,” the team wrote. “This explains the complicated orbits as seen from Earth.”

The team wanted SciNet to predict the movements of the Sun and Mars in the simplest way possible, so SciNet uses two sub-networks that send information back and forth. One network analyzes the data and learns from it; the other makes predictions and tests their accuracy based on that knowledge. Because these networks are connected by only a few links, the information passing between them is compressed and the communication stays simple. 
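A structural sketch of such a two-sub-network design, with an encoder squeezed through a narrow bottleneck and a decoder that answers a time query, might look like the following. The layer sizes, names, and query format are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

# Structural sketch of a SciNet-style architecture: an encoder compresses
# observations into a few latent numbers, and a decoder answers a question
# ("where will the planet be at time t?") from that bottleneck alone.

class SciNetSketch(nn.Module):
    def __init__(self, n_obs=20, n_latent=2):
        super().__init__()
        # Sub-network 1: learn a compressed representation of the data.
        self.encoder = nn.Sequential(
            nn.Linear(n_obs, 64), nn.ReLU(),
            nn.Linear(64, n_latent),          # the few "physical parameters"
        )
        # Sub-network 2: predict from the representation plus a question.
        self.decoder = nn.Sequential(
            nn.Linear(n_latent + 1, 64), nn.ReLU(),
            nn.Linear(64, 1),                 # e.g. predicted angle of Mars
        )

    def forward(self, observations, query_time):
        latent = self.encoder(observations)   # the narrow bottleneck
        return self.decoder(torch.cat([latent, query_time], dim=-1))

model = SciNetSketch()
obs = torch.randn(8, 20)                      # 8 sequences of 20 angle readings
t = torch.rand(8, 1)                          # future times to query
print(model(obs, t).shape)                    # torch.Size([8, 1])
```

The bottleneck is the point of the design: forcing all communication through a handful of latent variables pressures the encoder to find the simplest parameters that still let the decoder predict accurately.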

Conventional neural networks learn to recognize objects from huge data sets, generating features that are encoded in mathematical ‘nodes,’ the artificial equivalent of neurons. Unlike the formulas physicists derive, the workings of these networks are unpredictable and difficult to interpret. 

Artificial Intelligence and Scientific Discoveries 

One of the tests involved giving the network simulated data about the movements of Mars and the Sun as seen from Earth. From that vantage point, Mars’s orbit appears erratic and periodically reverses course. In the 1500s, Nicolaus Copernicus realized that the planets’ movements could be predicted with simpler formulas if they were treated as orbits around the Sun. 

When the neural network “discovered” similar formulas for Mars’s trajectory, it rediscovered one of the most important pieces of knowledge in history. 

Mario Krenn is a physicist at the University of Toronto in Canada, and he works on using artificial intelligence to make scientific discoveries. 

SciNet rediscovered “one of the most important shifts of paradigms in the history of science,” he said. 

According to Renner, humans are still needed to interpret the equations and determine how they are connected to the movement of the planets around the Sun. 

Hod Lipson is a roboticist at Columbia University in New York City. 

“This work is important because it is able to single out the crucial parameters that describe a physical system,” he says. “I think that these kinds of techniques are our only hope of understanding and keeping pace with increasingly complex phenomena, in physics and beyond.”

 

AI Used To Improve Prediction Of Lightning Strikes

Weather prediction has gotten substantially better over the past decade, with five-day forecasts now about 90% accurate. One aspect of the weather that has long eluded prediction, however, is lightning. Because lightning is so unpredictable, it’s very difficult to minimize the damage it does to human lives, property, and nature. Thanks to the work of a research team from the EPFL (Ecole Polytechnique Fédérale de Lausanne) School of Engineering, lightning strikes may become much more predictable in the near future.

As reported by SciTechDaily, a team of researchers from the Electromagnetic Compatibility Laboratory at EPFL’s School of Engineering recently created an AI program capable of accurately predicting a lightning strike 10 to 30 minutes in advance and within a 30-kilometer radius. The system applies artificial intelligence algorithms to meteorological data and will go on to be used in the European Laser Lightning Rod project.

The goal of the European Laser Lightning Rod (ELLR) project is to create new types of lightning protection systems and techniques. Specifically, ELLR aims to create a system that uses a laser-based technique to reduce the number of downward natural lightning strikes by stimulating upward lightning flashes.

According to the research team, current methods of lightning prediction rely on data gathered by radar or satellite, which tends to be very expensive. Radar is used to scan storms and determine their electrical potential. Other lightning prediction systems require three or more receivers in a region so that lightning strikes can be triangulated. Making predictions this way is often a slow and complex process.
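To see why several receivers are needed, consider a toy multilateration example: with arrival times at three stations, the strike position and emission time can be recovered by least squares. The coordinates, signal speed, and station layout below are made up for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy triangulation of a lightning strike from arrival times at 3 stations.
# Three stations give three equations for three unknowns (x, y, t0).

C = 3e5  # signal speed in km/s (radio waves travel at roughly light speed)
stations = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0]])  # km
true_strike, t0 = np.array([12.0, 25.0]), 0.0

# Simulated arrival times at each station.
arrivals = t0 + np.linalg.norm(stations - true_strike, axis=1) / C

def residuals(params):
    x, y, t = params
    predicted = t + np.linalg.norm(stations - np.array([x, y]), axis=1) / C
    return predicted - arrivals

fit = least_squares(residuals, x0=[1.0, 1.0, 0.0])
print(fit.x[:2].round(3))   # recovers roughly (12.0, 25.0)
```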

Instead, the method developed by the EPFL team uses data that can be collected at any standard weather station. The data is therefore much cheaper and easier to gather, and the system could be applied in remote regions not covered by satellite or radar and where communication networks are spotty.

The data for the predictions can also be gathered quickly and in real time, which means a region could be warned of incoming lightning strikes even before a storm has formed. As reported by ScienceDaily, the EPFL team’s method is a machine learning algorithm trained on data collected from 12 Swiss weather stations. The data spanned a decade and represented both mountainous and urban regions.

The reason that lightning strikes can be predicted at all is that they are heavily correlated with specific weather conditions. One of the most important ingredients for the formation of lightning is intense convection, where moist air rises as the atmosphere becomes unstable in the local region. Collisions between water droplets, ice particles and other molecules within the clouds can cause electrical charges within the particles to separate. This separation leads to the creation of cloud layers with opposing charges, which leads to the discharges that appear as lightning. The atmospheric features associated with these weather conditions can be fed into machine learning algorithms in order to predict lightning strikes.

Among the features in the dataset were wind speed, relative humidity, air temperature, and atmospheric pressure, labeled with recorded lightning strikes and the location of the system that detected each strike. From these features, the algorithm learned to recognize the patterns of conditions that lead to lightning strikes. When the model was tested, it correctly forecast a lightning strike around 80% of the time.
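As a hedged sketch of that general recipe (not EPFL’s actual model), one can train an off-the-shelf classifier on those four station variables against lightning labels. The data and decision rule below are entirely synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Sketch: predict lightning from standard weather-station variables.
# Synthetic data; EPFL's real model, features, and labels differ.

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.uniform(0, 30, n),      # wind speed (m/s)
    rng.uniform(20, 100, n),    # relative humidity (%)
    rng.uniform(-10, 35, n),    # air temperature (C)
    rng.uniform(950, 1050, n),  # surface pressure (hPa)
])
# Invented rule: humid, warm, low-pressure conditions favor lightning.
y = ((X[:, 1] > 70) & (X[:, 2] > 20) & (X[:, 3] < 1000)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```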

The EPFL team’s model is notable because it is the first system able to accurately predict lightning strikes from commonly available meteorological data.
