AI Teaches Itself Laws of Physics

In a monumental moment for both AI and physics, a neural network has “rediscovered” that Earth orbits the Sun. The development could prove critical in solving quantum-mechanics problems, and the researchers hope it can be used to discover new laws of physics by identifying patterns within large data sets.

The neural network, named SciNet, was fed measurements showing how the Sun and Mars appear from Earth. Scientists at the Swiss Federal Institute of Technology then tasked SciNet with predicting where the Sun and Mars would be at different times in the future. 

The research will be published in Physical Review Letters. 

Designing the Algorithm

The team, which included physicist Renato Renner, set out to build an algorithm that could distill large data sets into basic formulae, mirroring the way physicists derive equations. To do this, the researchers modeled the neural network loosely on the human brain.

The formulae that SciNet generated placed the Sun at the center of our solar system. One of the remarkable aspects of this research is that SciNet arrived at this conclusion in much the same way the astronomer Nicolaus Copernicus discovered heliocentrism.

The team highlighted this in a paper published on the preprint repository arXiv. 

“In the 16th century, Copernicus measured the angles between a distant fixed star and several planets and celestial bodies and hypothesized that the Sun, and not the Earth, is in the centre of our solar system and that the planets move around the Sun on simple orbits,” the team wrote. “This explains the complicated orbits as seen from Earth.”

The team wanted SciNet to predict the movements of the Sun and Mars in the simplest way possible, so SciNet uses two sub-networks that send information back and forth. One network analyzes the data and learns from it; the other makes predictions and tests their accuracy against that knowledge. Because the two networks are connected by only a few links, the information passed between them must be compressed, which keeps the communication simple.
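To make the bottleneck idea concrete, here is a minimal sketch in PyTorch of an encoder-decoder pair joined by a narrow latent layer. This is an illustration of the general architecture, not SciNet's published code: the layer sizes, variable names, and random stand-in data are all assumptions made for the example.

```python
# A minimal sketch (not SciNet's actual code) of the bottleneck idea:
# one sub-network compresses observations into a handful of latent
# variables, and the other must answer prediction questions using only
# that compressed representation.

import torch
import torch.nn as nn

class BottleneckNet(nn.Module):
    def __init__(self, obs_dim=10, question_dim=1, latent_dim=2, answer_dim=1):
        super().__init__()
        # Encoder: distills raw observations into a few latent parameters.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),          # the narrow "few links"
        )
        # Decoder: predicts an answer from the latent code plus a question
        # (e.g. "where will Mars appear at time t?").
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + question_dim, 64), nn.ReLU(),
            nn.Linear(64, answer_dim),
        )

    def forward(self, observations, question):
        latent = self.encoder(observations)
        return self.decoder(torch.cat([latent, question], dim=-1))

model = BottleneckNet()
obs = torch.randn(32, 10)   # batch of past observations (stand-in data)
t = torch.rand(32, 1)       # future times to query
prediction = model(obs, t)
```

The pressure that produces interpretable parameters comes from keeping the latent dimension tiny: with only a few numbers to pass along, the encoder is forced to distill the observations down to something like the compact orbital parameters a physicist would choose.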

Conventional neural networks learn to identify and recognize objects by training on huge data sets, generating features that are encoded in mathematical ‘nodes,’ the artificial equivalent of neurons. But unlike a physicist’s derivation, the knowledge a network acquires is unpredictable and difficult to interpret.

Artificial Intelligence and Scientific Discoveries 

One of the tests involved giving the network simulated data about the movements of Mars and the Sun, as seen from Earth. Viewed from Earth, Mars’s orbit appears erratic and periodically reverses course. In the 1500s, Nicolaus Copernicus discovered that much simpler formulae could predict the planets’ movements if they were taken to orbit the Sun.

When the neural network “discovered” similar formulae for Mars’s trajectory, it rediscovered one of the most important pieces of knowledge in history.

Mario Krenn is a physicist at the University of Toronto in Canada, and he works on using artificial intelligence to make scientific discoveries. 

SciNet rediscovered “one of the most important shifts of paradigms in the history of science,” he said. 

According to Renner, humans are still needed to interpret the equations and determine how they are connected to the movement of the planets around the Sun. 

Hod Lipson is a roboticist at Columbia University in New York City. 

“This work is important because it is able to single out the crucial parameters that describe a physical system,” he says. “I think that these kinds of techniques are our only hope of understanding and keeping pace with increasingly complex phenomena, in physics and beyond.”

AI System Automatically Transforms To Evade Censorship Attempts

Scientists at the University of Maryland (UMD) have created an AI-powered program that can transform itself to evade internet censorship attempts. As reported by TechXplore, authoritarian governments that censor the internet and the engineers who try to counter that censorship are locked in an arms race, each side trying to outdo the other. Circumventing a new censorship technique typically takes longer than developing one, but the system built by the University of Maryland team could make adapting to censorship attempts easier and quicker.

The research team’s tool is dubbed Geneva, which stands for Genetic Evasion. It dodges censorship attempts by exploiting bugs and failures in the logic of censors that humans would find hard to spot.

Information on the internet is transported in the form of packets: data is broken into small chunks at the sender’s computer and reassembled once it arrives at the receiver’s. A common method of censoring the internet is to monitor the packet data created when a search is made, then block results for banned keywords or domain names.

Geneva works by modifying how the packet data is actually broken up and transferred. This means that the censorship algorithms don’t classify the searches or results as banned content, or are otherwise unable to block the connection.
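As a toy illustration of the kind of manipulation described (not one of Geneva's real strategies), splitting a request's payload at an unusual boundary can defeat a naive censor that matches banned keywords against each chunk individually:

```python
# Toy illustration: a naive censor that matches a banned keyword against
# each chunk in isolation fails once the payload is split mid-keyword.
BANNED = b"forbidden-topic"

def naive_censor_blocks(chunks):
    # Blocks the connection if any single chunk contains the keyword.
    return any(BANNED in chunk for chunk in chunks)

payload = b"GET /search?q=forbidden-topic HTTP/1.1"

normal = [payload]                                   # keyword intact
split_at = payload.find(BANNED) + len(BANNED) // 2   # split mid-keyword
evasive = [payload[:split_at], payload[split_at:]]   # keyword straddles chunks

print(naive_censor_blocks(normal))   # True  -- request blocked
print(naive_censor_blocks(evasive))  # False -- censor misses the keyword
```

Real censors are more sophisticated than this one-line matcher, which is precisely why finding the manipulations that slip past them is hard for humans and well suited to automated search.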

Geneva utilizes a genetic algorithm, a type of algorithm inspired by biological evolution. In place of DNA strands, Geneva uses small chunks of code as building blocks, which can be rearranged into combinations that evade attempts to break up or stall data packets. These bits of code are rearranged over multiple generations: the instructions that best evaded censorship in one generation are combined to create the next generation’s strategies. This evolutionary process produces sophisticated evasion techniques fairly quickly, and Geneva can run in the background of the browser while a user browses the web.
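The evolutionary loop described above can be sketched in a few lines of Python. This is a toy illustration, not Geneva's actual code: the action names, population sizes, and the stubbed-out fitness function are all assumptions standing in for real packet manipulations tested against a live censor.

```python
# Toy genetic-algorithm loop (not Geneva's real implementation).
# Strategies are short sequences of hypothetical packet actions.
import random

ACTIONS = ["fragment", "duplicate", "tamper_ttl", "corrupt_checksum"]  # illustrative

def random_strategy(length=3):
    return [random.choice(ACTIONS) for _ in range(length)]

def fitness(strategy):
    # Placeholder: a real system would apply the strategy to live traffic
    # and measure whether the censored request succeeds.
    return random.random()

def crossover(a, b):
    # Combine two parent strategies at a random cut point.
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(strategy, rate=0.2):
    # Randomly swap out individual actions.
    return [random.choice(ACTIONS) if random.random() < rate else act
            for act in strategy]

population = [random_strategy() for _ in range(50)]
for generation in range(20):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:10]  # the best evaders survive
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]
```

In a real deployment the fitness evaluation is the expensive step, since each candidate strategy has to be tried against the censor; everything else in the loop is cheap bookkeeping.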

Dave Levin, an assistant professor of computer science at UMD, explained that Geneva puts anti-censors at a distinct advantage for the first time. Levin also explained that the researchers’ method flips traditional censorship evasion on its head. Traditionally, defeating a censorship strategy means first understanding how it works and then reverse-engineering a method to beat it. In the case of Geneva, however, the program figures out how to evade the censor first, and the researchers then analyze which censorship strategies are being used.

To test the tool’s performance, the research team ran Geneva on a computer located in China equipped with an unmodified Google Chrome browser. Using the strategies Geneva identified, they were able to browse for keyword results without censorship. The tool also proved useful in India and Kazakhstan, which likewise block certain URLs.

The research team aims to release the code and data used to create the model sometime soon, hoping that it will give people in authoritarian countries better, more open access to information. The research team is also experimenting with a method of deploying the tool on the device that serves the blocked content instead of the client’s computer (the computer that makes the search). If successful, this would mean that people could access blocked content without installing the tool on their computers.

“If Geneva can be deployed on the server-side and work as well as it does on the client-side, then it could potentially open up communications for millions of people,” Levin said. “That’s an amazing possibility, and it’s a direction we’re pursuing.”

AI Used To Improve Prediction Of Lightning Strikes

Weather prediction has gotten substantially better over the past decade, with five-day forecasts now about 90% accurate. One aspect of weather that has long eluded prediction, however, is lightning. Because lightning is so unpredictable, it is very difficult to minimize the damage it does to human lives, property, and nature. Thanks to the work of a research team from the EPFL (Ecole Polytechnique Fédérale de Lausanne) School of Engineering, lightning strikes may become much more predictable in the near future.

As reported by SciTechDaily, a team of researchers from the Electromagnetic Compatibility Laboratory at EPFL’s School of Engineering recently created an AI program capable of accurately predicting a lightning strike 10 to 30 minutes in advance, anywhere within a 30-kilometer radius. The system applies artificial intelligence algorithms to meteorological data, and it will go on to be utilized in the European Laser Lightning Rod project.

The goal of the European Laser Lightning Rod (ELLR) project is to create new types of lightning protection systems and techniques. Specifically, ELLR aims to build a laser-based system that reduces the number of downward natural lightning strikes by stimulating upward lightning flashes.

According to the research team, current methods of lightning prediction rely on data gathered by radar or satellite, which tends to be very expensive. Radar is used to scan storms and determine their electrical potential. Other lightning-prediction systems require three or more receivers in a region so that lightning occurrences can be triangulated. Making predictions this way is often a slow and complex process.

Instead, the method developed by the EPFL team utilizes data that can be collected at any standard weather station. This data is much cheaper and easier to collect, and the system could potentially be applied to remote regions that satellite and radar systems don’t cover and where communication networks are spotty.

The data for the predictions can also be gathered quickly and in real time, which means a region could potentially be warned of incoming lightning strikes even before a storm has formed. As reported by ScienceDaily, the method the EPFL team used is a machine learning algorithm trained on data collected from 12 Swiss weather stations. The data spanned a decade, and both mountainous and urban regions were represented in the dataset.

The reason lightning strikes can be predicted at all is that they are heavily correlated with specific weather conditions. One of the most important ingredients for the formation of lightning is intense convection, in which moist air rises as the atmosphere becomes locally unstable. Collisions between water droplets, ice particles, and other particles within the clouds cause electrical charges to separate, creating cloud layers with opposing charges, and it is the discharge between them that appears as lightning. The atmospheric features associated with these conditions can be fed into machine learning algorithms to predict lightning strikes.

Among the features in the dataset were variables like wind speed, relative humidity, air temperature, and atmospheric pressure. Those features were labeled with recorded lightning strikes and the location of the system that detected the strike. Based on these features, the algorithm was able to interpret patterns in the conditions that led to lightning strikes. When the model was tested, it proved able to correctly forecast a lightning strike around 80% of the time.
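As a rough illustration of this kind of pipeline, the sketch below trains an off-the-shelf classifier on synthetic weather-station features. It is not the EPFL team's model: the feature distributions, the toy labeling rule, and the choice of a random forest are all assumptions made for the example.

```python
# Hedged sketch of a lightning classifier on synthetic station data;
# the EPFL paper's exact model and features are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000
# Columns: wind speed, relative humidity, air temperature, surface pressure
X = np.column_stack([
    rng.gamma(2.0, 3.0, n),    # wind speed (m/s)
    rng.uniform(20, 100, n),   # relative humidity (%)
    rng.normal(15, 8, n),      # air temperature (deg C)
    rng.normal(1013, 10, n),   # surface pressure (hPa)
])
# Toy label: pretend lightning occurs with high humidity and low pressure.
y = ((X[:, 1] > 75) & (X[:, 3] < 1010)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

On real data the hard part is the severe class imbalance, since lightning at any given station is rare; the toy labels above gloss over that entirely.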

The EPFL team’s model is notable because it is the first example of a system based on commonly available meteorological data accurately predicting lightning strikes.

AI Used To Recreate Human Brain Waves In Real Time

Recently, a team of researchers created a neural network that is able to recreate human brain waves in real time. As reported by Futurism, the research team, made up of researchers from the Moscow Institute of Physics and Technology (MIPT) and the Neurobotics corporation, was able to visualize a person’s brain waves by translating them with a computer vision neural network and rendering them as images.

The results of the study were published in bioRxiv, and a video posted alongside the research paper shows how the network reconstructed images. The MIPT research team hopes the study will help them create post-stroke rehabilitation systems controlled by brain waves. To create rehabilitative devices for stroke victims, neurobiologists have to study the processes the brain uses to encode information, and a critical part of understanding those processes is studying how people perceive video information. According to ZME Science, current methods of extracting images from brain waves typically either analyze signals originating from the neurons themselves, through the use of implants, or extract images using functional MRI.

The research team from Neurobotics and MIPT utilized electroencephalography (EEG), which records brain waves through electrodes placed on the scalp. In such studies, people wear devices that track their neural signals while they watch a video or look at pictures. Analyzing this brain activity yielded input features for a machine learning system, which was able to reconstruct the images a person saw and render them on a screen in real time.

The experiment was divided into multiple parts. In the first phase, the researchers had the subjects watch 10-second clips of YouTube videos for around 20 minutes. The videos were divided into five categories: motorsports, human faces, abstract shapes, waterfalls, and moving mechanisms. Each category contained a variety of objects; the motorsports category, for example, included clips of snowmobiles and motorcycles.

The research team analyzed the EEG data collected while the participants watched the videos. The EEGs displayed distinct patterns for each type of video clip, which meant the team could potentially interpret what the participants were watching in more or less real time.

In the second phase of the experiment, three categories were selected at random, and two neural networks were created to work with them. The first network generated random images belonging to one of the three categories, refining them out of random noise. The second network generated noise based on the EEG scans. The outputs of the two networks were compared, and the generated images were updated based on the EEG noise data until they became similar to the images the test subjects were seeing.
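The description above leaves the exact architecture open, but the feedback loop can be sketched loosely as follows. This is an illustrative guess, not the MIPT/Neurobotics model: the network shapes, the EEG feature size, and the use of a simple reconstruction loss are all assumptions.

```python
# Loose sketch of the two-network feedback loop; all shapes and names
# are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

eeg_encoder = nn.Sequential(   # EEG features -> seed/noise vector
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 64),
)
generator = nn.Sequential(     # seed vector -> flattened 32x32 image
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 32 * 32), nn.Tanh(),
)

params = list(eeg_encoder.parameters()) + list(generator.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

eeg_features = torch.randn(16, 128)       # stand-in for recorded EEG
target_images = torch.randn(16, 32 * 32)  # stand-in for watched frames

for step in range(100):
    # Seed the generator with EEG-derived noise so the output image is
    # conditioned on what the subject was watching.
    seed = eeg_encoder(eeg_features)
    images = generator(seed)
    # Compare generated images with the frames the subject saw and
    # nudge both networks toward agreement.
    loss = nn.functional.mse_loss(images, target_images)
    opt.zero_grad()
    loss.backward()
    opt.step()
```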

After the system had been designed, the researchers tested the program’s ability to visualize brain waves by showing the test subjects videos they hadn’t yet seen from the same categories. The EEGs generated during the second round of viewings were given to the networks, and the networks were able to generate images that could be easily placed into the right category 90% of the time.

The researchers noted that their results were surprising, because it had long been assumed that an EEG does not carry enough information to reconstruct the images a person observes. The team’s results proved that it can be done.

Vladimir Konyshev, the head of the Neurorobotics Lab at MIPT, explained that although the research team is currently focused on creating assistive technologies for people with disabilities, the technology they are working on could eventually be used to create neural control devices for the general population. Konyshev explained to TechXplore:

“We’re working on the Assistive Technologies project of Neuronet of the National Technology Initiative, which focuses on the brain-computer interface that enables post-stroke patients to control an exoskeleton arm for neurorehabilitation purposes, or paralyzed patients to drive an electric wheelchair, for example. The ultimate goal is to increase the accuracy of neural control for healthy individuals, too.”
