
Artificial Neural Networks

AI System Automatically Transforms To Evade Censorship Attempts


Scientists at the University of Maryland (UMD) have created an AI-powered program that can transform itself to evade internet censorship attempts. As reported by TechXplore, authoritarian governments that censor the internet and the engineers who try to counter that censorship are locked in an arms race, with each side trying to outdo the other. Learning to circumvent a censorship technique typically takes more time than developing the technique itself, but the new system from the University of Maryland team could make adapting to censorship attempts easier and quicker.

The research team's tool is dubbed Geneva, short for Genetic Evasion. It dodges censorship attempts by exploiting bugs and logic flaws in censors that are hard for humans to find.

Information on the internet travels in the form of packets: data is broken into small chunks at the sender's computer, sent across the network, and reassembled when it arrives at the receiver's computer. A common method of censoring the internet is to monitor the packet data created when a search is made, then block results for banned keywords or domain names.

Geneva works by modifying how the packet data is actually broken up and transferred. As a result, censorship algorithms either do not classify the searches or results as banned content, or are otherwise unable to block the connection.
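One class of trick along these lines can be illustrated with a toy example: splitting a request across packets so that a banned keyword never appears whole in any single packet a naive censor inspects. Everything below, including the function names, the censor's logic, and the keyword, is invented for illustration and is not Geneva's actual code:

```python
def fragment_payload(payload: bytes, size: int) -> list[bytes]:
    """Split a payload into fixed-size chunks, as TCP segmentation might."""
    return [payload[i:i + size] for i in range(0, len(payload), size)]

def naive_censor(packets: list[bytes], banned: bytes) -> bool:
    """A flawed censor that only inspects each packet in isolation."""
    return any(banned in pkt for pkt in packets)

request = b"GET /?q=banned-term HTTP/1.1"
whole = [request]                     # unfragmented: one packet
split = fragment_payload(request, 5)  # fragmented: keyword is split up

print(naive_censor(whole, b"banned-term"))  # True  -> blocked
print(naive_censor(split, b"banned-term"))  # False -> slips through
```

The receiver still reassembles the full request, so the connection works normally. A censor that reassembled entire streams would not be fooled this easily; Geneva's value lies in automatically discovering which of many such manipulations a given censor fails to handle.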

Geneva utilizes a genetic algorithm, a type of algorithm inspired by biological evolution. In place of DNA strands, Geneva uses small chunks of code as building blocks. These bits of code can be rearranged into specific combinations that evade attempts to break up or stall data packets. Geneva's code is rearranged over multiple generations: the instructions that best evaded censorship in the previous generation are combined to create a new set of strategies. This evolutionary process enables sophisticated evasion techniques to be created fairly quickly. Geneva can operate while a user browses the web, running in the background of the browser.
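That evolutionary loop can be sketched as a toy genetic algorithm. The building-block actions, fitness function, and parameters below are invented stand-ins; Geneva's real strategy grammar scores candidate strategies against actual censors rather than a hard-coded objective:

```python
import random

random.seed(0)  # reproducible toy run

# Invented packet-manipulation building blocks (not Geneva's real ones).
ACTIONS = ["duplicate", "fragment", "tamper_ttl", "reorder", "drop"]

def fitness(strategy):
    # Stand-in objective: pretend the censor is defeated by strategies
    # that fragment early and tamper with TTL fields.
    score = 2 if "fragment" in strategy[:2] else 0
    return score + strategy.count("tamper_ttl")

def crossover(a, b):
    # Combine two parent strategies at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=30, pop_size=20, length=4):
    pop = [[random.choice(ACTIONS) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # keep the best evaders
        children = [crossover(random.choice(parents),
                              random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        for child in children:                # occasional random mutation
            if random.random() < 0.2:
                child[random.randrange(length)] = random.choice(ACTIONS)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best strategy:", best, "fitness:", fitness(best))
```

The key idea carried over from the article is the generational structure: the best-performing instruction sequences survive each round and are recombined into new candidates.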

Dave Levin, an assistant professor of Computer Science at UMD, explained that Geneva puts anti-censors at a distinct advantage for the first time. Levin also explained that the method the researchers used to create their tool flips traditional censorship evasion strategies on their head. Traditional methods of defeating censorship involve understanding how a censorship strategy works and then reverse-engineering methods to beat it. In the case of Geneva, however, the program first figures out how to evade the censor, and then the researchers analyze which censorship strategies are being used.

In order to test their tool's performance, the research team tested Geneva on a computer located in China equipped with an unmodified Google Chrome browser. When the research team used the strategies that Geneva identified, they were able to browse for keyword results without censorship. The tool also proved useful in India and Kazakhstan, which also block certain URLs.

The research team aims to release the code and data used to create the model sometime soon, hoping that it will give people in authoritarian countries better, more open access to information. The research team is also experimenting with a method of deploying the tool on the device that serves the blocked content instead of the client’s computer (the computer that makes the search). If successful, this would mean that people could access blocked content without installing the tool on their computers.

“If Geneva can be deployed on the server-side and work as well as it does on the client-side, then it could potentially open up communications for millions of people,” Levin said. “That’s an amazing possibility, and it’s a direction we’re pursuing.”


Blogger and programmer with specialties in Machine Learning and Deep Learning topics. Daniel hopes to help others use the power of AI for social good.

Deep Learning Used to Find Disease-Related Genes


A new study led by researchers at Linköping University demonstrates how an artificial neural network (ANN) can analyze large amounts of gene expression data and lead to the discovery of groups of disease-related genes. The study was published in Nature Communications, and the scientists hope the method can be applied within precision medicine and individualized treatment.

Scientists are currently developing maps of biological networks based on how different proteins or genes interact with each other. The new study uses artificial intelligence (AI) to find out whether biological networks can be discovered through deep learning. Artificial neural networks, which are trained on experimental data, are able to find patterns within massive amounts of complex data, which is why they are often used in applications such as image recognition. Despite this seemingly enormous potential, the use of this machine learning method has so far been limited within biological research.

Sanjiv Dwivedi is a postdoc in the Department of Physics, Chemistry and Biology (IFM) at Linköping University.

“We have for the first time used deep learning to find disease-related genes. This is a very powerful method in the analysis of huge amounts of biological information, or ‘big data’,” says Dwivedi.

The scientists relied on a large database with information regarding the expression patterns of 20,000 genes in a large number of people. The artificial neural network was not told which gene expression patterns were from people with diseases, or which ones were from healthy individuals. The AI model was then trained to find patterns of gene expression.
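The unsupervised setup described above can be sketched as a small autoencoder-style network: it is trained only to reconstruct unlabeled expression data, so any structure in its hidden layers emerges without disease labels. The data, layer sizes, and training loop below are toy stand-ins, not the study's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in expression matrix: 200 "people" x 50 "genes"
# (the real study used ~20,000 genes across many individuals).
n_people, n_genes, n_hidden = 200, 50, 8
X = rng.normal(size=(n_people, n_genes))

W1 = rng.normal(scale=0.1, size=(n_genes, n_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_genes))   # decoder weights

def reconstruction_mse(W1, W2):
    H = np.tanh(X @ W1)                 # hidden "pattern" layer
    return float(np.mean((H @ W2 - X) ** 2))

loss_before = reconstruction_mse(W1, W2)

lr = 0.01
for _ in range(500):
    H = np.tanh(X @ W1)
    err = H @ W2 - X                    # reconstruction error
    dW2 = H.T @ err / n_people          # backpropagate the error
    dH = (err @ W2.T) * (1 - H ** 2)    # tanh derivative
    dW1 = X.T @ dH / n_people
    W1 -= lr * dW1
    W2 -= lr * dW2

loss_after = reconstruction_mse(W1, W2)
print(f"MSE before: {loss_before:.3f}  after: {loss_after:.3f}")
```

Because no labels are used, whatever groupings appear in the hidden layer of such a network are discovered from the data alone, which is the property the Linköping team exploited when comparing the hidden layers to known biological networks.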

One of the mysteries surrounding machine learning is that it is currently impossible to see how an artificial neural network gets to its final result. It is only possible to see the information that goes in and the information that is produced, but everything that happens in-between consists of several layers of mathematically processed information. These inner workings of an artificial neural network are not yet able to be deciphered. The scientists wanted to know if there were any similarities between the designs of the neural network and the familiar biological networks. 

Mike Gustafsson is a senior lecturer at IFM and leads the study. 

“When we analysed our neural network, it turned out that the first hidden layer represented to a large extent interactions between various proteins. Deeper in the model, in contrast, on the third level, we found groups of different cell types. It’s extremely interesting that this type of biologically relevant grouping is automatically produced, given that our network has started from unclassified gene expression data,” says Gustafsson.

The scientists then wanted to know whether their model of gene expression could be used to determine which gene expression patterns are associated with disease and which are normal. They were able to confirm that the model discovers relevant patterns that agree with biological mechanisms in the body. Another discovery was that the artificial neural network could possibly uncover brand new patterns, since it was trained with unclassified data. The researchers will now investigate these previously unknown patterns and whether they are relevant within biology.

“We believe that the key to progress in the field is to understand the neural network. This can teach us new things about biological contexts, such as diseases in which many factors interact. And we believe that our method gives models that are easier to generalise and that can be used for many different types of biological information,” says Gustafsson.

Through collaborations with medical researchers, Gustafsson hopes to apply the method in precision medicine. This could help determine which specific types of medicine patients should receive.

The study was financially supported by the Swedish Foundation for Strategic Research (SSF) and the Swedish Research Council.

 


Deep Learning System Can Accurately Predict Extreme Weather


Engineers at Rice University have developed a deep learning system that is capable of accurately predicting extreme weather events up to five days in advance. The system, which taught itself, only requires minimal information about current weather conditions in order to make the predictions.             

Part of the system's training involves examining hundreds of pairs of maps, each showing surface temperatures and air pressure at a height of five kilometers, with the two maps in a pair taken several days apart. The training also presents scenarios that produced extreme weather, such as the hot and cold spells that cause heat waves and winter storms. Upon completing the training, the deep learning system was able to make five-day forecasts of extreme weather based on maps it had not previously seen, with an accuracy rate of 85%.
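The supervised setup described above can be sketched in miniature: each training example is a flattened "map" paired with a label indicating whether it preceded extreme weather. The synthetic data and simple logistic-regression model below are illustrative stand-ins for the study's real maps and capsule network:

```python
import numpy as np

rng = np.random.default_rng(1)

# 400 toy "maps" of 16 grid cells each; labels are generated from a
# hidden linear rule standing in for real extreme-weather precursors.
n, d = 400, 16
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)   # 1 = "preceded extreme weather"

# Logistic regression trained by gradient descent.
w = np.zeros(d)
lr = 0.5
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w)))   # predicted probabilities
    w -= lr * X.T @ (p - y) / n      # gradient of the log loss

pred = (1 / (1 + np.exp(-(X @ w))) > 0.5).astype(float)
accuracy = float((pred == y).mean())
print(f"training accuracy: {accuracy:.2f}")
```

The real system learns far subtler, nonlinear precursor patterns, but the framing is the same: maps in, extreme/non-extreme labels out.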

According to Pedram Hassanzadeh, co-author of the study which was published online in the American Geophysical Union’s Journal of Advances in Modeling Earth Systems, the system could be used as a tool and act as an early warning for weather forecasters. It will be especially useful for learning more about certain atmospheric conditions that cause extreme weather scenarios. 

Since the invention of computer-based numerical weather prediction (NWP) in the 1950s, day-to-day weather forecasts have steadily improved. However, NWP is not able to make reliable predictions about extreme weather events, such as heat waves.

“It may be that we need faster supercomputers to solve the governing equations of the numerical weather prediction models at higher resolutions,” said Hassanzadeh, an assistant professor of mechanical engineering and of Earth, environmental and planetary sciences at Rice University. “But because we don’t fully understand the physics and precursor conditions of extreme-causing weather patterns, it’s also possible that the equations aren’t fully accurate, and they won’t produce better forecasts, no matter how much computing power we put in.”

In 2017, Hassanzadeh was joined by study co-authors and graduate students Ashesh Chattopadhyay and Ebrahim Nabizadeh. Together, they set out on a different path. 

“When you get these heat waves or cold spells, if you look at the weather map, you are often going to see some weird behavior in the jet stream, abnormal things like large waves or a big high-pressure system that is not moving at all,” Hassanzadeh said. “It seemed like this was a pattern recognition problem. So we decided to try to reformulate extreme weather forecasting as a pattern-recognition problem rather than a numerical problem.”

“We decided to train our model by showing it a lot of pressure patterns in the five kilometers above the Earth, and telling it, for each one, ‘This one didn’t cause extreme weather. This one caused a heat wave in California. This one didn’t cause anything. This one caused a cold spell in the Northeast,'” Hassanzadeh continued. “Not anything specific like Houston versus Dallas, but more of a sense of the regional area.”

Prior to computers, weather prediction relied on analog forecasting. It worked in much the same way as the new system, but with humans doing the pattern matching instead of computers.

“One way prediction was done before computers is they would look at the pressure system pattern today, and then go to a catalog of previous patterns and compare and try to find an analog, a closely similar pattern,” Hassanzadeh said. “If that one led to rain over France after three days, the forecast would be for rain in France.”
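The analog method Hassanzadeh describes amounts to a nearest-neighbor lookup: compare today's pressure pattern against a catalog of past patterns and reuse the outcome of the closest match. The catalog entries and distance metric below are invented for illustration:

```python
import math

# Toy catalog of past pressure patterns and what followed them.
catalog = [
    ([1.0, 0.2, -0.5, 0.8], "rain over France in 3 days"),
    ([-0.7, 1.1, 0.3, -0.2], "heat wave in California"),
    ([0.1, -0.9, 0.6, 0.4], "no extreme weather"),
]

def distance(a, b):
    # Euclidean distance between two flattened pressure patterns.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def analog_forecast(today):
    """Return the outcome of the most similar historical pattern."""
    _, outcome = min(
        ((distance(today, pattern), outcome)
         for pattern, outcome in catalog),
        key=lambda pair: pair[0],
    )
    return outcome

print(analog_forecast([0.9, 0.3, -0.4, 0.7]))  # closest to the first entry
```

The neural network's contribution, per Hassanzadeh, is learning which features of the patterns actually matter for the comparison, rather than using a fixed distance over raw maps.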

Now, neural networks can learn on their own and do not necessarily need to rely on humans to find connections. 

“It didn’t matter that we don’t fully understand the precursors because the neural network learned to find those connections itself,” Hassanzadeh said. “It learned which patterns were critical for extreme weather, and it used those to find the best analog.”

To test their concept, the team relied on data taken from realistic computer simulations. They originally reported early results with a convolutional neural network, but the team then shifted towards capsule neural networks. Convolutional neural networks are not able to recognize relative spatial relationships, but capsule neural networks can. These relative spatial relationships are important when it comes to the evolution of weather patterns. 

“The relative positions of pressure patterns, the highs and lows you see on weather maps, are the key factor in determining how weather evolves,” Hassanzadeh said.

Capsule neural networks also require less training data than convolutional neural networks. 

The team will continue to work on the system in order for it to be capable of being used in operational forecasting, but Hassanzadeh hopes that it eventually will lead to more accurate forecasts for extreme weather. 

“We are not suggesting that at the end of the day this is going to replace NWP,” he said. “But this might be a useful guide for NWP. Computationally, this could be a super cheap way to provide some guidance, an early warning, that allows you to focus NWP resources specifically where extreme weather is likely.”

“We want to leverage ideas from explainable AI (artificial intelligence) to interpret what the neural network is doing,” he said. “This might help us identify the precursors to extreme-causing weather patterns and improve our understanding of their physics.”

 


Moon Jellyfish and Neural Networks


Moon jellyfish (Aurelia aurita), which are present in almost all of the world's oceans, are now being studied by researchers seeking to learn how their neural networks function. Using their translucent bells, which measure from three to 30 centimeters, these cnidarians are able to move around very efficiently.

The lead author of the study is Fabian Pallasdies from the Neural Network Dynamics and Computation research group at the Institute of Genetics at the University of Bonn.

“These jellyfish have ring-shaped muscles that contract, thereby pushing the water out of the bell,” Pallasdies explains. 

The efficiency of their movements comes from the ability of the moon jellyfish to create vortices at the edge of their bell, in turn increasing propulsion. 

“Furthermore, only the contraction of the bell requires muscle power; the expansion happens automatically because the tissue is elastic and returns to its original shape,” continues Pallasdies. 

The group of scientists has now developed a mathematical model of the neural networks of moon jellyfish. It is used to investigate the neural networks and how they regulate the movement of the moon jellyfish.

Professor Dr. Raoul-Martin Memmesheimer is the head of the research group.

“Jellyfish are among the oldest and simplest organisms that move around in water,” he says.

The team will now look at the origins of nervous systems in jellyfish and other organisms.

Jellyfish have been studied for decades, and extensive experimental neurophysiological data was collected between the 1950s and 1980s. The researchers at the University of Bonn used the data to develop their mathematical model. They studied individual nerve cells, nerve cell networks, the entire animal, and the surrounding water. 

“The model can be used to answer the question of how the excitation of individual nerve cells results in the movement of the moon jellyfish,” says Pallasdies.

Moon jellyfish are able to perceive their location through light stimuli and with a balance organ. The animal has ways of correcting itself when turned by the ocean current. This often involves compensating for the movement and going towards the water surface. The researchers confirmed through their mathematical model that the jellyfish use one neural network for swimming straight ahead and two for rotational movements. 

The activity of the nerve cells moves throughout the jellyfish's bell in a wave-like pattern, and locomotion still works even when large portions of the bell are injured. Scientists at the University of Bonn can now explain this with their simulations.
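This damage tolerance can be illustrated with a toy model: treat the bell as a ring of nerve cells in which a firing cell excites its living neighbors. As long as the surviving cells remain connected, excitation still reaches all of them. The sketch below is purely illustrative and is not the Bonn group's actual mathematical model:

```python
def spreads_everywhere(n_cells, damaged, start):
    """Simulate excitation spreading on a ring with some cells removed."""
    alive = set(range(n_cells)) - set(damaged)
    if start not in alive:
        return False
    fired = {start}
    frontier = [start]
    while frontier:
        cell = frontier.pop()
        # Each cell excites its two ring neighbors, if they are alive.
        for nb in ((cell - 1) % n_cells, (cell + 1) % n_cells):
            if nb in alive and nb not in fired:
                fired.add(nb)
                frontier.append(nb)
    return fired == alive

# Intact ring: excitation reaches every cell.
print(spreads_everywhere(12, damaged=[], start=0))      # True
# One damaged section: the wave travels around the other
# side of the bell, so all surviving cells still fire.
print(spreads_everywhere(12, damaged=[3, 4], start=0))  # True
```

This mirrors the quoted observation: a signal picked up anywhere on the bell can still recruit the rest of the network, provided the injury does not fully disconnect it.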

“Jellyfish can pick up and transmit signals on their bell at any point,” says Pallasdies. “When one nerve cell fires, the others fire as well, even if sections of the bell are impaired.”

The moon jellyfish is the latest animal species in which neural networks are being studied. The natural world can provide many answers to new questions about neural networks, artificial intelligence, robotics, and more. Currently, underwater robots are being developed based on the swimming principles of jellyfish.

“Perhaps our study can help to improve the autonomous control of these robots,” Pallasdies says.

The scientists hope that their research and ongoing work will help explain the early evolution of neural networks. 

 
