NASA Currently Using A.I. for Space Science

In a statement released last month, NASA said that A.I. has the potential to help tackle some of the biggest problems in space science, such as searching for life on other planets and identifying asteroids. NASA scientists are partnering with leaders in the A.I. industry, including Intel, IBM, and Google, to apply advanced computer algorithms to those problems.

NASA is relying on certain A.I. technologies, such as machine learning, to interpret the data that will eventually be collected by telescopes and observatories including the James Webb Space Telescope and the Transiting Exoplanet Survey Satellite.

Giada Arney, an astrobiologist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, hopes that machine learning can help her and her team find some indication of life in data that will be collected by the telescopes and observatories. 

“These technologies are very important, especially for big data sets and especially in the exoplanet field,” Arney said in the statement. “Because the data we’re going to get from future observations is going to be sparse and noisy. It’s going to be really hard to understand. So using these kinds of tools has so much potential to help us.”

NASA runs an eight-week program every summer, called the Frontier Development Lab (FDL), that brings together leaders in the technology and space sectors.

Shawn Domagal-Goldman is a NASA Goddard astrobiologist.

“FDL feels like some really good musicians with different instruments getting together for a jam session in the garage, finding something really cool, and saying, ‘Hey we’ve got a band here,'” he said in the statement.

In 2018, an FDL team mentored by Domagal-Goldman and Arney developed a machine learning technique that relies on neural networks to identify the chemistry of exoplanets from the wavelengths of light emitted or absorbed by molecules in their atmospheres.

Using this technique, researchers identified various molecules in the atmosphere of the exoplanet WASP-12b more accurately than conventional methods could.

According to Domagal-Goldman, the neural network can also recognize when it lacks sufficient data. The technique, which is Bayesian, can tell scientists how certain it is about each prediction.

“In places where the data weren’t good enough to give a really accurate result, this model was better at knowing that it wasn’t sure of the answer, which is really important if we are to trust these predictions,” Domagal-Goldman said.
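As a rough sketch of how a network can report its own confidence, the example below uses Monte Carlo dropout, a common way to approximate Bayesian uncertainty: dropout is left active at prediction time, and the spread of repeated predictions measures how sure the model is. The spectral bins, molecule list, and architecture are illustrative assumptions, not the FDL team's actual model.

```python
# Illustrative sketch: uncertainty-aware molecule detection from a spectrum
# via Monte Carlo dropout. All names and sizes here are assumptions.
import torch
import torch.nn as nn

N_WAVELENGTHS = 200                 # hypothetical number of spectral bins
MOLECULES = ["H2O", "CO2", "CH4"]   # hypothetical target molecules

class SpectrumClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_WAVELENGTHS, 64), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(64, len(MOLECULES)), nn.Sigmoid(),  # per-molecule presence
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, spectrum, n_samples=100):
    """Keep dropout active at inference; the spread of the sampled
    predictions approximates the model's confidence."""
    model.train()  # train mode leaves dropout enabled
    with torch.no_grad():
        samples = torch.stack([model(spectrum) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

model = SpectrumClassifier()
spectrum = torch.randn(1, N_WAVELENGTHS)  # stand-in for a real spectrum
mean, std = predict_with_uncertainty(model, spectrum)
for name, m, s in zip(MOLECULES, mean[0], std[0]):
    # A large standard deviation is the model saying "I'm not sure."
    print(f"{name}: presence {m.item():.2f} +/- {s.item():.2f}")
```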

The Bayesian technique is still being developed, but other FDL technologies are already in real-world use. In 2017, FDL participants developed a machine learning program capable of quickly creating 3D models of nearby asteroids and accurately estimating their shapes, sizes, and spin rates. This type of information helps NASA detect and deflect asteroids that threaten Earth.

Astronomers traditionally create 3D models with simple computer software that analyzes radar measurements of a moving asteroid, helping scientists infer its physical properties from changes in the radar signal.
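As a toy illustration of inferring one physical property from changes in such a signal, the sketch below estimates a spin period from periodic variation in an echo time series using a Fourier transform. The signal is synthetic and the method deliberately simplified; real radar shape-modeling pipelines are far more involved.

```python
# Toy example: recover a (synthetic) asteroid's spin period from the
# periodic variation of an echo time series via FFT.
import numpy as np

dt = 60.0                       # seconds between samples (assumed)
t = np.arange(0, 86_400, dt)    # one day of observations
true_period = 6 * 3600          # synthetic asteroid spins every 6 hours
signal = 1 + 0.3 * np.sin(2 * np.pi * t / true_period)
signal += 0.1 * np.random.default_rng(1).standard_normal(t.size)  # noise

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))  # drop the DC term
freqs = np.fft.rfftfreq(t.size, d=dt)
estimated_period = 1 / freqs[np.argmax(spectrum)]
print(f"estimated spin period: {estimated_period / 3600:.1f} hours")  # ~6.0
```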

Bill Diamond is president and chief executive officer of the SETI Institute.

“An adept astronomer with standard compute resources could shape a single asteroid in one to three months,” Diamond said. “So the question for the research team was: Can we speed it up?”

The team, consisting of students from France, South Africa, and the United States, along with mentors from academia and the technology company Nvidia, developed an algorithm capable of rendering an asteroid in as little as four days. The technique is now used by astronomers at the Arecibo Observatory in Puerto Rico for real-time shape modeling of asteroids.

Researchers are also suggesting that A.I. technologies be built into future spacecraft, which would allow them to make real-time decisions.

“A.I. methods will help us free up processing power from our own brains by doing a lot of the initial legwork on difficult tasks,” Arney said. “But these methods won’t replace humans any time soon, because we’ll still need to check the results.” 

Deep Learning Used to Find Disease-Related Genes

A new study led by researchers at Linköping University demonstrates how an artificial neural network (ANN) can mine large amounts of gene expression data, leading to the discovery of groups of disease-related genes. The study was published in Nature Communications, and the scientists hope the method can be applied within precision medicine and individualized treatment.

Scientists are currently developing maps of biological networks based on how different proteins or genes interact with each other. The new study uses artificial intelligence (AI) to find out whether biological networks can be discovered through deep learning. Artificial neural networks, which are trained on experimental data, are able to find patterns within massive amounts of complex data, which is why they are often used in applications such as image recognition. Despite this seemingly enormous potential, the machine learning method has so far seen limited use within biological research.

Sanjiv Dwivedi is a postdoc in the Department of Physics, Chemistry and Biology (IFM) at Linköping University.

“We have for the first time used deep learning to find disease-related genes. This is a very powerful method in the analysis of huge amounts of biological information, or ‘big data’,” says Dwivedi.

The scientists relied on a large database containing the expression patterns of 20,000 genes from a large number of people. The artificial neural network was not told which expression patterns came from people with diseases and which came from healthy individuals; the model was trained to find patterns of gene expression on its own.
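One standard way to train on expression data without disease labels, matching the unsupervised setup described here, is a deep autoencoder that compresses each profile through successively smaller hidden layers and learns to reconstruct it. The sketch below is a minimal version; the layer sizes and training details are illustrative assumptions, not the study's actual architecture.

```python
# Minimal sketch: unsupervised training of a deep autoencoder on
# gene expression profiles. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

N_GENES = 20_000  # expression values per person, as in the study

class ExpressionAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # No disease labels are used anywhere in training.
        self.encoder = nn.Sequential(
            nn.Linear(N_GENES, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 32),
        )
        self.decoder = nn.Sequential(
            nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, N_GENES),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ExpressionAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

profiles = torch.randn(64, N_GENES)  # stand-in batch of expression profiles
for _ in range(10):  # training loop, shortened for illustration
    loss = loss_fn(model(profiles), profiles)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained this way, the hidden layers can be inspected for structure, which is how the researchers compared the network's internal groupings with known biological networks.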

One of the mysteries surrounding machine learning is that it is currently impossible to see exactly how an artificial neural network arrives at its result. Only the input and the output are visible; everything in between consists of several layers of mathematically processed information whose inner workings cannot yet be deciphered. The scientists wanted to know whether there were any similarities between the structure of the neural network and familiar biological networks.

Mike Gustafsson is a senior lecturer at IFM and leads the study. 

“When we analysed our neural network, it turned out that the first hidden layer represented to a large extent interactions between various proteins. Deeper in the model, in contrast, on the third level, we found groups of different cell types. It’s extremely interesting that this type of biologically relevant grouping is automatically produced, given that our network has started from unclassified gene expression data,” says Gustafsson.

The scientists then wanted to know whether their model of gene expression could be used to determine which expression patterns are associated with disease and which are normal. They confirmed that the model discovers relevant patterns that agree with biological mechanisms in the body. Because the network was trained on unclassified data, it may also have discovered entirely new patterns; the researchers will now investigate whether these previously unknown patterns are relevant within biology.

“We believe that the key to progress in the field is to understand the neural network. This can teach us new things about biological contexts, such as diseases in which many factors interact. And we believe that our method gives models that are easier to generalise and that can be used for many different types of biological information,” says Gustafsson.

Through collaborations with medical researchers, Gustafsson hopes to apply the method in precision medicine. This could help determine which specific types of medicine patients should receive.

The study was financially supported by the Swedish Foundation for Strategic Research (SSF) and the Swedish Research Council.

Deep Learning System Can Accurately Predict Extreme Weather

Engineers at Rice University have developed a deep learning system capable of accurately predicting extreme weather events up to five days in advance. The self-taught system requires only minimal information about current weather conditions to make its predictions.

Part of the system’s training involves examining hundreds of pairs of maps, each indicating surface temperatures and air pressures at a height of five kilometers, with the conditions shown several days apart. The training also presents scenarios that produced extreme weather, such as hot and cold spells that can cause heat waves and winter storms. Upon completing the training, the deep learning system was able to make five-day forecasts of extreme weather from maps it had not previously seen, with an accuracy rate of 85%.

According to Pedram Hassanzadeh, co-author of the study, which was published online in the American Geophysical Union’s Journal of Advances in Modeling Earth Systems, the system could serve as an early-warning tool for weather forecasters. It will be especially useful for learning more about the atmospheric conditions that cause extreme weather.

Since the invention of computer-based numerical weather prediction (NWP) in the 1950s, day-to-day weather forecasts have steadily improved. However, NWP is not able to make reliable predictions about extreme weather events, such as heat waves.

“It may be that we need faster supercomputers to solve the governing equations of the numerical weather prediction models at higher resolutions,” said Hassanzadeh, an assistant professor of mechanical engineering and of Earth, environmental and planetary sciences at Rice University. “But because we don’t fully understand the physics and precursor conditions of extreme-causing weather patterns, it’s also possible that the equations aren’t fully accurate, and they won’t produce better forecasts, no matter how much computing power we put in.”

In 2017, Hassanzadeh was joined by study co-authors and graduate students Ashesh Chattopadhyay and Ebrahim Nabizadeh. Together, they set out on a different path. 

“When you get these heat waves or cold spells, if you look at the weather map, you are often going to see some weird behavior in the jet stream, abnormal things like large waves or a big high-pressure system that is not moving at all,” Hassanzadeh said. “It seemed like this was a pattern recognition problem. So we decided to try to reformulate extreme weather forecasting as a pattern-recognition problem rather than a numerical problem.”

“We decided to train our model by showing it a lot of pressure patterns in the five kilometers above the Earth, and telling it, for each one, ‘This one didn’t cause extreme weather. This one caused a heat wave in California. This one didn’t cause anything. This one caused a cold spell in the Northeast,'” Hassanzadeh continued. “Not anything specific like Houston versus Dallas, but more of a sense of the regional area.”
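A minimal sketch of that pattern-recognition framing is a small convolutional classifier that maps a two-channel grid (surface temperature and five-kilometer air pressure) to an extreme-weather label. The grid size, labels, and architecture below are illustrative assumptions; as described later, the team's final model actually used capsule networks rather than a plain convolutional one.

```python
# Illustrative sketch: classify paired temperature/pressure maps into
# extreme-weather categories. Shapes and labels are assumptions.
import torch
import torch.nn as nn

LABELS = ["no extreme event", "heat wave", "cold spell"]  # assumed classes

model = nn.Sequential(
    nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(LABELS)),  # assumes 64x64 input maps
)

maps = torch.randn(8, 2, 64, 64)  # batch of temperature/pressure map pairs
logits = model(maps)
print(logits.argmax(dim=1))  # predicted label index for each map
```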

Prior to computers, weather prediction relied on analog forecasting, which worked in much the same way as the new system, but with humans doing the pattern matching instead of computers.

“One way prediction was done before computers is they would look at the pressure system pattern today, and then go to a catalog of previous patterns and compare and try to find an analog, a closely similar pattern,” Hassanzadeh said. “If that one led to rain over France after three days, the forecast would be for rain in France.”
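At its core, the analog method Hassanzadeh describes is a nearest-neighbor search over a catalog of past weather maps. A minimal sketch, with a synthetic catalog and plain Euclidean distance standing in for a human forecaster's judgment:

```python
# Toy analog forecasting: find the most similar historical pressure map
# and reuse its known outcome. Catalog and metric are assumptions.
import numpy as np

rng = np.random.default_rng(0)
catalog = rng.standard_normal((500, 32, 32))             # 500 past pressure maps
outcomes = ["no extreme weather"] * 499 + ["heat wave"]  # outcome of each map

def best_analog(today):
    """Return the recorded outcome of the closest catalog map."""
    distances = np.linalg.norm(catalog - today, axis=(1, 2))
    return outcomes[int(np.argmin(distances))]

today = rng.standard_normal((32, 32))  # stand-in for today's pressure map
print(best_analog(today))
```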

Now, neural networks can learn on their own and do not necessarily need to rely on humans to find connections. 

“It didn’t matter that we don’t fully understand the precursors because the neural network learned to find those connections itself,” Hassanzadeh said. “It learned which patterns were critical for extreme weather, and it used those to find the best analog.”

To test their concept, the team relied on data from realistic computer simulations. They originally reported early results with a convolutional neural network, but then shifted to capsule neural networks. Unlike convolutional neural networks, capsule neural networks can recognize relative spatial relationships, which are important to how weather patterns evolve.

“The relative positions of pressure patterns, the highs and lows you see on weather maps, are the key factor in determining how weather evolves,” Hassanzadeh said.

Capsule neural networks also require less training data than convolutional neural networks. 

The team will continue working on the system so that it can be used in operational forecasting, and Hassanzadeh hopes it will eventually lead to more accurate forecasts of extreme weather.

“We are not suggesting that at the end of the day this is going to replace NWP,” he said. “But this might be a useful guide for NWP. Computationally, this could be a super cheap way to provide some guidance, an early warning, that allows you to focus NWP resources specifically where extreme weather is likely.”

“We want to leverage ideas from explainable AI (artificial intelligence) to interpret what the neural network is doing,” he said. “This might help us identify the precursors to extreme-causing weather patterns and improve our understanding of their physics.”

Moon Jellyfish and Neural Networks

Moon jellyfish (Aurelia aurita), which are present in almost all of the world’s oceans, are being studied by researchers to learn how their neural networks function. Using their translucent bells, which measure from three to 30 centimeters across, the cnidarians are capable of moving around very efficiently.

The lead author of the study is Fabian Pallasdies from the Neural Network Dynamics and Computation research group at the Institute of Genetics at the University of Bonn.

“These jellyfish have ring-shaped muscles that contract, thereby pushing the water out of the bell,” Pallasdies explains. 

The efficiency of their movements comes from the ability of the moon jellyfish to create vortices at the edge of their bell, in turn increasing propulsion. 

“Furthermore, only the contraction of the bell requires muscle power; the expansion happens automatically because the tissue is elastic and returns to its original shape,” continues Pallasdies. 

The group of scientists has now developed a mathematical model of the moon jellyfish’s neural networks, which they use to investigate how those networks regulate the animal’s movement.

Professor Dr. Raoul-Martin Memmesheimer is the head of the research group.

“Jellyfish are among the oldest and simplest organisms that move around in water,” he says.

The team will now look at the origins of the nervous system in jellyfish and other organisms.

Jellyfish have been studied for decades, and extensive experimental neurophysiological data was collected between the 1950s and 1980s. The researchers at the University of Bonn used the data to develop their mathematical model. They studied individual nerve cells, nerve cell networks, the entire animal, and the surrounding water. 

“The model can be used to answer the question of how the excitation of individual nerve cells results in the movement of the moon jellyfish,” says Pallasdies.

Moon jellyfish are able to perceive their location through light stimuli and with a balance organ. The animal has ways of correcting itself when turned by the ocean current. This often involves compensating for the movement and going towards the water surface. The researchers confirmed through their mathematical model that the jellyfish use one neural network for swimming straight ahead and two for rotational movements. 

The activity of the nerve cells moves through the jellyfish’s bell in a wave-like pattern, and locomotion works even when large portions of the bell are injured. Scientists at the University of Bonn can now explain this with their simulations.

“Jellyfish can pick up and transmit signals on their bell at any point,” says Pallasdies. “When one nerve cell fires, the others fire as well, even if sections of the bell are impaired.”
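That robustness can be illustrated with a toy simulation: nerve cells arranged in a ring, where each cell fires when a neighbor fires and then pauses briefly, so a wave of excitation spreads around the bell and still reaches every intact cell even when a section is removed. The ring size, coupling, and refractory rule below are illustrative assumptions, not the Bonn group's model.

```python
# Toy simulation: a firing wave travels around a ring of nerve cells and
# still covers all intact cells when a section of the "bell" is injured.
import numpy as np

N = 40                          # nerve cells around the bell margin
firing = np.zeros(N, bool)      # which cells fire this step
refractory = np.zeros(N, int)   # steps each cell must wait before refiring
alive = np.ones(N, bool)
alive[10:18] = False            # injure a section of the bell
firing[0] = True                # stimulate a single cell

for step in range(30):
    neighbors = np.roll(firing, 1) | np.roll(firing, -1)
    new_firing = alive & neighbors & (refractory == 0)
    refractory = np.maximum(refractory - 1, 0)
    refractory[firing] = 5      # cells that just fired pause before refiring
    firing = new_firing
    # '*' = firing, 'x' = injured, '.' = quiet
    print("".join("*" if f else ("x" if not a else ".")
                  for f, a in zip(firing, alive)))
```

In the printout, the wave started by a single cell travels in both directions and excites every living cell, mirroring Pallasdies' observation that the signal still gets through when parts of the bell are impaired.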

The moon jellyfish is the latest animal species whose neural networks are being studied. The natural world can provide many answers to new questions about neural networks, artificial intelligence, robotics, and more. Underwater robots, for example, are currently being developed based on the swimming principles of jellyfish.

“Perhaps our study can help to improve the autonomous control of these robots,” Pallasdies says.

The scientists hope that their research and ongoing work will help explain the early evolution of neural networks. 
