Dragonflies and Missile Defense Systems

Dragonflies have extremely fast reflexes despite having little depth perception. They react to moving prey in about 50 milliseconds, roughly the time it takes information to cross just three neurons. Sandia National Laboratories is researching how dragonfly brains work and how they are able to calculate complex interception trajectories.

The research is led by computational neuroscientist Frances Chance, who developed the algorithms and will present the work at the International Conference on Neuromorphic Systems in Knoxville, Tennessee. The research has already been presented at the Annual Meeting of the Organization for Computational Neurosciences in Barcelona, Spain.

Chance specializes in replicating biological neural networks such as brains, particularly neurons and the way they send information through the nervous system. Brains can be thought of as more complex and more capable computers: they are far more energy efficient while learning and adapting faster.

“I try to predict how neurons are wired in the brain and understand what kinds of computations those neurons are doing, based on what we know about the behavior of the animal or what we know about the neural responses,” Chance said. 

Sandia's research involved creating a simple simulated environment populated with computer-generated dragonflies. Algorithms drove the simulated dragonflies to catch prey just like their real-life counterparts, processing visual information while hunting much as real dragonflies do. This showed that such a model is feasible and could be applied in many different sectors.
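Sandia has not released the model itself, so the following is only a minimal sketch of the general idea: a pursuit loop that updates the hunter's heading toward the prey once every 50 milliseconds. The proportional-pursuit rule, names, and parameters below are illustrative assumptions, not the lab's algorithm.

```python
import numpy as np

DT = 0.05  # one 50 ms reaction step per update, per the article

def intercept_step(pred_pos, pred_vel, prey_pos, speed=3.0, gain=0.5):
    """Hypothetical pursuit update: steer the predator's velocity
    toward the prey's current bearing (simple proportional pursuit)."""
    to_prey = prey_pos - pred_pos
    desired = speed * to_prey / np.linalg.norm(to_prey)
    new_vel = pred_vel + gain * (desired - pred_vel)  # gradual turn
    new_vel = speed * new_vel / np.linalg.norm(new_vel)
    return pred_pos + new_vel * DT, new_vel

# Toy chase: prey flies in a straight line, the simulated dragonfly pursues.
pred, vel = np.array([0.0, 0.0]), np.array([3.0, 0.0])
prey, prey_vel = np.array([5.0, 5.0]), np.array([-1.0, 0.0])
for step in range(200):
    prey = prey + prey_vel * DT
    pred, vel = intercept_step(pred, vel, prey)
    if np.linalg.norm(prey - pred) < 0.1:
        print(f"caught after {step * DT:.2f} s")
        break
```

Real dragonflies are thought to use more sophisticated steering than pure pursuit, but the structure of the problem, a fast sense-and-turn loop, is the same.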

The research is already being applied in the missile defense sector. A system like the one guiding the simulated dragonflies could improve missile defense systems, which work much like a dragonfly targeting and catching prey: both intercept an object in flight. Dragonflies are among the most effective predators in the world, catching 95% of the prey they target.

With these developments, researchers are trying to make the on-board computers in missile defense systems smaller while keeping them fast and accurate. Current systems rely on established intercept techniques that carry a heavy computational load, and this is one area where a model based on dragonflies and their prey could help.

The new technology and research could improve missile defense systems in several ways, including reducing the size, weight, and power needs of onboard computers. Interceptors could then become smaller and lighter, and therefore far more maneuverable. The new systems could also learn new ways to intercept moving targets such as hypersonic weapons, which, unlike ballistic missiles, do not follow predictable trajectories. Finally, the system might be able to use simpler sensors than the complex ones currently used to intercept a target.

One challenge for this idea is that missiles and dragonflies travel at very different speeds, which could introduce discrepancies when transferring the model from one domain to the other.

Outside of missile defense, a computational model of the dragonfly brain could also help develop better machine learning and artificial intelligence. As this kind of technology finds its way into more and more sectors, defense is among those using it to become more efficient. The research shows how complex systems can be built from ones that already exist in our environment, dragonfly brains among them, and how new technology lets us model and improve on them.

 

Deep Learning System Can Accurately Predict Extreme Weather

Engineers at Rice University have developed a deep learning system capable of accurately predicting extreme weather events up to five days in advance. The self-taught system requires only minimal information about current weather conditions to make its predictions.

Part of the system's training involved examining hundreds of pairs of maps, each showing surface temperatures and air pressures at an altitude of five kilometers, with the two maps in a pair taken several days apart. The training also included scenarios that produced extreme weather, such as the hot and cold spells behind heat waves and winter storms. Upon completing its training, the deep learning system was able to make five-day forecasts of extreme weather from maps it had not previously seen, with an accuracy rate of 85%.
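The study's code is not reproduced here, but the setup described, weather maps in and an extreme-event label out, can be sketched as a small image classifier. The shapes, class names, and architecture below are assumptions for illustration, not the authors' model (which, as noted later, ultimately used capsule networks rather than a plain convolutional network).

```python
import torch
import torch.nn as nn

# Hypothetical setup: 2-channel maps (surface temperature and pressure
# at ~5 km altitude) on a coarse grid, labeled by what followed days later.
N_CLASSES = 3  # e.g. no extreme event / heat wave / cold spell (assumed)

model = nn.Sequential(
    nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(N_CLASSES),  # logits over the outcome classes
)

maps = torch.randn(8, 2, 64, 128)  # a batch of dummy weather maps
labels = torch.randint(0, N_CLASSES, (8,))
loss = nn.CrossEntropyLoss()(model(maps), labels)
loss.backward()  # one gradient step; real training loops over years of maps
```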

According to Pedram Hassanzadeh, co-author of the study, which was published online in the American Geophysical Union's Journal of Advances in Modeling Earth Systems, the system could serve as an early-warning tool for weather forecasters. It should be especially useful for learning more about the atmospheric conditions that lead to extreme weather.

Since the invention of computer-based numerical weather prediction (NWP) in the 1950s, day-to-day weather forecasts have steadily improved. However, NWP is not able to make reliable predictions about extreme weather events, such as heat waves.

“It may be that we need faster supercomputers to solve the governing equations of the numerical weather prediction models at higher resolutions,” said Hassanzadeh, an assistant professor of mechanical engineering and of Earth, environmental and planetary sciences at Rice University. “But because we don’t fully understand the physics and precursor conditions of extreme-causing weather patterns, it’s also possible that the equations aren’t fully accurate, and they won’t produce better forecasts, no matter how much computing power we put in.”

In 2017, Hassanzadeh was joined by study co-authors and graduate students Ashesh Chattopadhyay and Ebrahim Nabizadeh. Together, they set out on a different path. 

“When you get these heat waves or cold spells, if you look at the weather map, you are often going to see some weird behavior in the jet stream, abnormal things like large waves or a big high-pressure system that is not moving at all,” Hassanzadeh said. “It seemed like this was a pattern recognition problem. So we decided to try to reformulate extreme weather forecasting as a pattern-recognition problem rather than a numerical problem.”

“We decided to train our model by showing it a lot of pressure patterns in the five kilometers above the Earth, and telling it, for each one, ‘This one didn’t cause extreme weather. This one caused a heat wave in California. This one didn’t cause anything. This one caused a cold spell in the Northeast,'” Hassanzadeh continued. “Not anything specific like Houston versus Dallas, but more of a sense of the regional area.”

Prior to computers, weather prediction relied on analog forecasting, which worked in much the same way as the new system, except that humans, rather than computers, searched for the matching patterns.

“One way prediction was done before computers is they would look at the pressure system pattern today, and then go to a catalog of previous patterns and compare and try to find an analog, a closely similar pattern,” Hassanzadeh said. “If that one led to rain over France after three days, the forecast would be for rain in France.”
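The analog method Hassanzadeh describes amounts to a nearest-neighbor search over a catalog of historical maps. A minimal sketch, in which the catalog, the Euclidean distance metric, and the outcome labels are all illustrative assumptions:

```python
import numpy as np

def analog_forecast(today_map, catalog_maps, catalog_outcomes):
    """Return the recorded outcome of the most similar historical map.

    catalog_maps: array of shape (n_days, H, W) of past pressure maps
    catalog_outcomes: what followed each map, e.g. "rain over France"
    """
    # Distance between today's map and every map in the catalog
    dists = np.linalg.norm(
        catalog_maps.reshape(len(catalog_maps), -1) - today_map.ravel(),
        axis=1,
    )
    return catalog_outcomes[int(np.argmin(dists))]

# Toy usage with random stand-in data
catalog = np.random.randn(1000, 20, 40)
outcomes = [f"outcome {i}" for i in range(1000)]
print(analog_forecast(np.random.randn(20, 40), catalog, outcomes))
```

A neural network improves on this by learning which features of the map matter for the comparison, rather than weighting every grid point equally, which is essentially the point Hassanzadeh makes next.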

Now, neural networks can learn on their own and do not necessarily need to rely on humans to find connections. 

“It didn’t matter that we don’t fully understand the precursors because the neural network learned to find those connections itself,” Hassanzadeh said. “It learned which patterns were critical for extreme weather, and it used those to find the best analog.”

To test their concept, the team relied on data taken from realistic computer simulations. They originally reported early results with a convolutional neural network, but then shifted to capsule neural networks. Unlike convolutional networks, capsule networks can recognize relative spatial relationships, which are important to how weather patterns evolve.

“The relative positions of pressure patterns, the highs and lows you see on weather maps, are the key factor in determining how weather evolves,” Hassanzadeh said.

Capsule neural networks also require less training data than convolutional neural networks. 

The team will continue working on the system so that it can be used in operational forecasting, and Hassanzadeh hopes it will eventually lead to more accurate forecasts for extreme weather.

“We are not suggesting that at the end of the day this is going to replace NWP,” he said. “But this might be a useful guide for NWP. Computationally, this could be a super cheap way to provide some guidance, an early warning, that allows you to focus NWP resources specifically where extreme weather is likely.”

“We want to leverage ideas from explainable AI (artificial intelligence) to interpret what the neural network is doing,” he said. “This might help us identify the precursors to extreme-causing weather patterns and improve our understanding of their physics.”

 

Moon Jellyfish and Neural Networks

Moon jellyfish (Aurelia aurita), which are present in almost all of the world's oceans, are being studied by researchers to learn how their neural networks function. Using their translucent bells, which measure from three to 30 centimeters across, the cnidarians are able to move around very efficiently.

The lead author of the study is Fabian Pallasdies from the Neural Network Dynamics and Computation research group at the Institute of Genetics at the University of Bonn.

“These jellyfish have ring-shaped muscles that contract, thereby pushing the water out of the bell,” Pallasdies explains. 

The efficiency of their movements comes from the ability of the moon jellyfish to create vortices at the edge of their bell, in turn increasing propulsion. 

“Furthermore, only the contraction of the bell requires muscle power; the expansion happens automatically because the tissue is elastic and returns to its original shape,” continues Pallasdies. 

The group has now developed a mathematical model of the moon jellyfish's neural networks, which it uses to investigate how those networks regulate the animal's swimming motion.

Professor Dr. Raoul-Martin Memmesheimer is the head of the research group.

“Jellyfish are among the oldest and simplest organisms that move around in water,” he says.

Starting from the jellyfish, the team will now look into the origins of the nervous system and extend the work to other organisms.

Jellyfish have been studied for decades, and extensive experimental neurophysiological data was collected between the 1950s and 1980s. The researchers at the University of Bonn used the data to develop their mathematical model. They studied individual nerve cells, nerve cell networks, the entire animal, and the surrounding water. 

“The model can be used to answer the question of how the excitation of individual nerve cells results in the movement of the moon jellyfish,” says Pallasdies.

Moon jellyfish perceive their position through light stimuli and a balance organ, and they can right themselves when turned by the ocean current, typically compensating for the disturbance and swimming toward the water surface. Through their mathematical model, the researchers confirmed that the jellyfish use one neural network for swimming straight ahead and two for rotational movements.

The activity of the nerve cells moves across the jellyfish's bell in a wave-like pattern, and locomotion still works even when large portions of the bell are injured. Scientists at the University of Bonn can now explain this with their simulations.

“Jellyfish can pick up and transmit signals on their bell at any point,” says Pallasdies. “When one nerve cell fires, the others fire as well, even if sections of the bell are impaired.”
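This injury tolerance can be illustrated with a toy version of such a network: neurons arranged in a ring, like the margin of the bell, each exciting its neighbors, with a brief refractory period so the wave travels outward instead of bouncing back. The network below is a deliberately simple stand-in, not the Bonn group's model, and all parameters are assumptions.

```python
import numpy as np

N = 40                      # neurons around the bell margin (assumed)
alive = np.ones(N, bool)
alive[10:18] = False        # an "injured" section of the bell
fired_at = np.full(N, -10)  # last firing time; refractory for 3 steps

active = {0}                # stimulate one point on the bell
for t in range(2 * N):
    nxt = set()
    for i in active:
        fired_at[i] = t
        for j in ((i - 1) % N, (i + 1) % N):  # excite ring neighbors
            if alive[j] and t - fired_at[j] > 3:
                nxt.add(j)
    active = nxt
    if not active:
        break

# Count how many surviving neurons the wave reached
print("reached:", int((fired_at[alive] >= 0).sum()), "of", int(alive.sum()))
```

Because the excitation can travel around the ring in either direction, a wave started at any point still reaches every surviving neuron even when a section of the bell is removed, matching the behavior Pallasdies describes.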

The moon jellyfish is the latest animal species whose neural networks are being studied in this way. The natural world can offer answers to new questions in neural networks, artificial intelligence, robotics, and more; underwater robots are already being developed based on the swimming principles of jellyfish.

“Perhaps our study can help to improve the autonomous control of these robots,” Pallasdies says.

The scientists hope that their research and ongoing work will help explain the early evolution of neural networks. 

 

Amazon Creates New Tool To Engineer AI Models With Just A Few Lines Of Code

As efforts to make machine learning easier and more accessible increase, companies are creating tools to simplify the creation and optimization of deep learning models. As VentureBeat reports, Amazon has launched a new tool designed to help create and modify machine learning models in just a few lines of code.

Carrying out machine learning on a dataset is often a long, complex task. The data must be transformed and preprocessed, and then the proper model must be created and customized. Tweaking a model's hyperparameters and retraining can take a long time, and to help solve issues like this Amazon has launched AutoGluon, an attempt to automate much of the overhead that typically comes with building a machine learning system. For instance, machine learning engineers must not only decide on an appropriate architecture but also experiment with the model's hyperparameters. AutoGluon endeavors to make both the creation of the network architecture and the selection of appropriate hyperparameters easier.

AutoGluon is based on work that Microsoft and Amazon began in 2017. The original Gluon was a machine learning interface designed to let developers mix and match optimized components to create their own models; AutoGluon instead creates a model end to end, based on the user's requirements. AutoGluon is reportedly capable of producing a model and selecting its hyperparameters, within a range of specified choices, in as few as three lines of code. The developer only has to provide a few arguments, such as the desired training completion time, and AutoGluon will find the best model it can train within the specified runtime and the available computational resources.

AutoGluon is currently capable of creating models for image classification, text classification, object detection, and tabular prediction. AutoGluon’s API is also intended to allow more experienced developers to be able to customize the auto-generated model and improve performance. At the moment, AutoGluon is only available for Linux and it requires Python 3.6 or 3.7.
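For a sense of what the "few lines of code" claim looks like in practice, here is a minimal tabular-prediction example using AutoGluon's current tabular API; the interface at the time of the article differed slightly, and the file names and label column are placeholders.

```python
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset("train.csv")          # placeholder path
predictor = TabularPredictor(label="class").fit(  # "class" = target column
    train_data,
    time_limit=600,  # desired training completion time, in seconds
)
predictions = predictor.predict(TabularDataset("test.csv"))
```

Within the time limit, AutoGluon trains several candidate model types and returns the best it found, which is the automated search over architectures and hyperparameters described above.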

Jonas Mueller, part of the AutoGluon development team, explained the reasoning behind the creation of AutoGluon:

“We developed AutoGluon to truly democratize machine learning, and make the power of deep learning available to all developers. AutoGluon solves this problem as all choices are automatically tuned within default ranges that are known to perform well for the particular task and model.”

AutoGluon is the latest in a long line of tools intended to reduce the expertise and time needed to train machine learning models. Software libraries like Theano automated the calculation of gradient vectors, while Keras let developers easily specify certain desired hyperparameters. Amazon believes there is still more ground to be covered in democratizing machine learning, such as making data preprocessing and hyperparameter tuning simpler.

The creation of AutoGluon appears to be part of a broader effort by Amazon to make training and deploying machine learning systems easier and more accessible. Amazon has also made machine learning-centric changes to its AWS suite; for example, it has upgraded the AWS SageMaker toolkit, which lets developers train and deploy models to the cloud. SageMaker comes with a variety of tools that let developers automatically choose algorithms, train and validate models, and improve model accuracy.
