

AI Used To Recreate Human Brain Waves In Real Time


Recently, a team of researchers created a neural network that is able to recreate human brain waves in real time. As reported by Futurism, the research team, made up of researchers from the Moscow Institute of Physics and Technology (MIPT) and the Neurobotics corporation, was able to visualize a person’s brain waves by translating them with a computer vision neural network and rendering them as images.

The results of the study were published on bioRxiv, and a video was posted alongside the research paper showing how the network reconstructed images. The MIPT research team hopes that the study will help them create post-stroke rehabilitation systems controlled by brain waves. In order to create rehabilitative devices for stroke victims, neurobiologists have to study the processes the brain uses to encode information. A critical part of understanding these processes is studying how people perceive video information. According to ZME Science, the current methods of extracting images from brain waves typically either analyze the signals originating from neurons directly, through the use of implants, or extract images using functional MRI.

The research team from Neurobotics and MIPT utilized electroencephalography (EEG), which logs brain waves through electrodes placed on the scalp. In this setup, people wear a device that tracks their neural signals while they watch a video or look at pictures. Analysis of the brain activity yielded input features that could be fed to a machine learning system, which was able to reconstruct the images a person saw and render them on a screen in real time.

The experiment was divided into multiple parts. In the first phase, the researchers had the subjects watch 10-second clips of YouTube videos for around 20 minutes. The videos were divided into five categories: motorsports, human faces, abstract shapes, waterfalls, and moving mechanisms. Each category could contain a variety of objects; for example, the motorsports category contained clips of both snowmobiles and motorcycles.

The research team analyzed the EEG data collected while the participants watched the videos. The EEGs displayed distinct patterns for each type of video clip, which meant the team could potentially interpret, more or less in real time, what content the participants were watching.

In the second phase of the experiment, three categories were selected at random and two neural networks were created to work with them. The first network generated random images belonging to one of the three categories, refining random noise into an image. Meanwhile, the other network generated comparable noise based on the EEG recordings. The outputs of the two networks were compared, and the randomly generated images were updated based on the EEG data until the generated images became similar to the images the test subjects were actually seeing.
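
To make the setup concrete, here is a minimal sketch of how such a pairing of networks might be wired up. It uses PyTorch, and the layer sizes, feature dimensions, and the reduction of the two-network comparison to a simple reconstruction loss are all illustrative assumptions rather than the architecture described in the paper.

```python
import torch
import torch.nn as nn

LATENT_DIM = 64        # size of the shared latent "noise" space (assumed)
EEG_FEATURES = 128     # number of features extracted from the EEG (assumed)

# Network 1: an image generator that refines a latent noise vector into an image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32),   # a tiny 32x32 RGB image, purely for illustration
    nn.Tanh(),
)

# Network 2: maps EEG features into the same latent space, so the generator
# can be driven by brain activity instead of random noise.
eeg_encoder = nn.Sequential(
    nn.Linear(EEG_FEATURES, 128),
    nn.ReLU(),
    nn.Linear(128, LATENT_DIM),
)

optimizer = torch.optim.Adam(
    list(generator.parameters()) + list(eeg_encoder.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()

def training_step(eeg_batch, frame_batch):
    """One update: push the image generated from EEG features toward
    the video frame the subject was actually watching."""
    latent = eeg_encoder(eeg_batch)                  # EEG features -> latent code
    reconstruction = generator(latent)               # latent code -> flattened image
    target = frame_batch.view(frame_batch.size(0), -1)
    loss = loss_fn(reconstruction, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in data just to show the call pattern.
eeg_batch = torch.randn(8, EEG_FEATURES)
frame_batch = torch.randn(8, 3, 32, 32)
print(training_step(eeg_batch, frame_batch))
```

In the actual study the generated and EEG-derived representations were compared and iteratively refined rather than trained with a single supervised loss, but the division of labor shown here, with one network producing images and another translating EEG activity into the generator's input space, is the same.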

After the system had been designed, the researchers tested the program’s ability to visualize brain waves by showing the test subjects previously unseen videos from the same categories. The EEGs recorded during this second round of viewing were fed to the networks, which generated images that could be easily placed into the right category 90% of the time.

The researchers noted that their results were surprising, because it had long been assumed that an EEG does not contain enough information to reconstruct the images a person observes. The team’s results showed that it can be done.

Vladimir Konyshev, the head of the Neurorobotics Lab at MIPT, explained that although the research team is currently focused on creating assistive technologies for people with disabilities, the technology they are working on could eventually be used to create neural control devices for the general population. Konyshev explained to TechXplore:

“We’re working on the Assistive Technologies project of Neuronet of the National Technology Initiative, which focuses on the brain-computer interface that enables post-stroke patients to control an exoskeleton arm for neurorehabilitation purposes, or paralyzed patients to drive an electric wheelchair, for example. The ultimate goal is to increase the accuracy of neural control for healthy individuals, too.”


Deep Learning System Can Accurately Predict Extreme Weather


Engineers at Rice University have developed a deep learning system that is capable of accurately predicting extreme weather events up to five days in advance. The self-trained system requires only minimal information about current weather conditions in order to make its predictions.

The system’s training involved examining hundreds of pairs of maps, each indicating surface temperatures and air pressures at a height of five kilometers, with the maps in each pair taken several days apart. The training also included scenarios that produced extreme weather, such as the hot and cold spells that cause heat waves and winter storms. Upon completing the training, the deep learning system was able to make five-day forecasts of extreme weather based on maps it had not previously seen, with an accuracy rate of 85%.
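
As a rough illustration of treating forecasting as pattern recognition, the sketch below classifies pairs of weather maps with a small convolutional network. It is written in PyTorch; the grid size, channel layout, and label set are assumptions made for the example, not the configuration used by the Rice team.

```python
import torch
import torch.nn as nn

# Each sample stacks two "maps": surface temperature and air pressure at ~5 km,
# on a coarse latitude x longitude grid. The label is the extreme-weather
# outcome observed several days later.
N_CHANNELS, HEIGHT, WIDTH = 2, 48, 96                  # illustrative grid size
CLASSES = ["no_extreme", "heat_wave", "cold_spell"]    # illustrative label set

classifier = nn.Sequential(
    nn.Conv2d(N_CHANNELS, 16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * (HEIGHT // 4) * (WIDTH // 4), len(CLASSES)),
)

# Forecasting as pattern recognition: given today's maps, predict which
# category of weather to expect a few days out.
maps = torch.randn(4, N_CHANNELS, HEIGHT, WIDTH)   # stand-in for real map data
logits = classifier(maps)
print(logits.argmax(dim=1))                        # predicted category per map pair
```

The published system used capsule rather than plain convolutional layers, as discussed below, but the framing is the same: maps in, extreme-weather category out.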

According to Pedram Hassanzadeh, co-author of the study, which was published online in the American Geophysical Union’s Journal of Advances in Modeling Earth Systems, the system could serve as an early-warning tool for weather forecasters. It will be especially useful for learning more about the atmospheric conditions that cause extreme weather.

Since the invention of computer-based numerical weather prediction (NWP) in the 1950s, day-to-day weather forecasts have steadily improved. However, NWP is still unable to make reliable predictions about extreme weather events, such as heat waves.

“It may be that we need faster supercomputers to solve the governing equations of the numerical weather prediction models at higher resolutions,” said Hassanzadeh, an assistant professor of mechanical engineering and of Earth, environmental and planetary sciences at Rice University. “But because we don’t fully understand the physics and precursor conditions of extreme-causing weather patterns, it’s also possible that the equations aren’t fully accurate, and they won’t produce better forecasts, no matter how much computing power we put in.”

In 2017, Hassanzadeh was joined by study co-authors and graduate students Ashesh Chattopadhyay and Ebrahim Nabizadeh. Together, they set out on a different path. 

“When you get these heat waves or cold spells, if you look at the weather map, you are often going to see some weird behavior in the jet stream, abnormal things like large waves or a big high-pressure system that is not moving at all,” Hassanzadeh said. “It seemed like this was a pattern recognition problem. So we decided to try to reformulate extreme weather forecasting as a pattern-recognition problem rather than a numerical problem.”

“We decided to train our model by showing it a lot of pressure patterns in the five kilometers above the Earth, and telling it, for each one, ‘This one didn’t cause extreme weather. This one caused a heat wave in California. This one didn’t cause anything. This one caused a cold spell in the Northeast,'” Hassanzadeh continued. “Not anything specific like Houston versus Dallas, but more of a sense of the regional area.”

Before computers, weather prediction relied on analog forecasting, which worked in much the same way as the new system, except that the pattern matching was done by humans rather than machines.

“One way prediction was done before computers is they would look at the pressure system pattern today, and then go to a catalog of previous patterns and compare and try to find an analog, a closely similar pattern,” Hassanzadeh said. “If that one led to rain over France after three days, the forecast would be for rain in France.”

Now, neural networks can learn on their own and do not necessarily need to rely on humans to find connections. 

“It didn’t matter that we don’t fully understand the precursors because the neural network learned to find those connections itself,” Hassanzadeh said. “It learned which patterns were critical for extreme weather, and it used those to find the best analog.”

To test their concept, the team relied on data taken from realistic computer simulations. They originally reported early results with a convolutional neural network, but then shifted to capsule neural networks. Unlike convolutional networks, capsule networks can recognize relative spatial relationships, which matter for how weather patterns evolve.

“The relative positions of pressure patterns, the highs and lows you see on weather maps, are the key factor in determining how weather evolves,” Hassanzadeh said.

Capsule neural networks also require less training data than convolutional neural networks. 

The team will continue to develop the system so that it can be used in operational forecasting, and Hassanzadeh hopes it will eventually lead to more accurate forecasts of extreme weather.

“We are not suggesting that at the end of the day this is going to replace NWP,” he said. “But this might be a useful guide for NWP. Computationally, this could be a super cheap way to provide some guidance, an early warning, that allows you to focus NWP resources specifically where extreme weather is likely.”

“We want to leverage ideas from explainable AI (artificial intelligence) to interpret what the neural network is doing,” he said. “This might help us identify the precursors to extreme-causing weather patterns and improve our understanding of their physics.”

 


Moon Jellyfish and Neural Networks


Moon jellyfish (Aurelia aurita), which are present in almost all of the world’s oceans, are now being studied by researchers to learn how their neural networks function. Using their translucent bells, which measure from three to 30 centimeters across, these cnidarians are able to move around very efficiently.

The lead author of the study is Fabian Pallasdies of the Neural Network Dynamics and Computation research group at the Institute of Genetics at the University of Bonn.

“These jellyfish have ring-shaped muscles that contract, thereby pushing the water out of the bell,” Pallasdies explains. 

The efficiency of their movements comes from the ability of the moon jellyfish to create vortices at the edge of their bell, in turn increasing propulsion. 

“Furthermore, only the contraction of the bell requires muscle power; the expansion happens automatically because the tissue is elastic and returns to its original shape,” continues Pallasdies. 

The group of scientists has now developed a mathematical model of the moon jellyfish’s neural networks, which they use to investigate how those networks regulate the animal’s movement.

Professor Dr. Raoul-Martin Memmesheimer is the head of the research group.

“Jellyfish are among the oldest and simplest organisms that move around in water,” he says.

The team will now use the jellyfish to examine the origins of the nervous system before turning to other organisms.

Jellyfish have been studied for decades, and extensive experimental neurophysiological data was collected between the 1950s and 1980s. The researchers at the University of Bonn used the data to develop their mathematical model. They studied individual nerve cells, nerve cell networks, the entire animal, and the surrounding water. 

“The model can be used to answer the question of how the excitation of individual nerve cells results in the movement of the moon jellyfish,” says Pallasdies.

Moon jellyfish are able to perceive their location through light stimuli and with a balance organ. The animal has ways of correcting itself when turned by the ocean current. This often involves compensating for the movement and going towards the water surface. The researchers confirmed through their mathematical model that the jellyfish use one neural network for swimming straight ahead and two for rotational movements. 

The activity of the nerve cells moves through the jellyfish’s bell in a wave-like pattern, and locomotion continues to work even when large portions of the bell are injured. Scientists at the University of Bonn are now able to explain this with their simulations.

“Jellyfish can pick up and transmit signals on their bell at any point,” says Pallasdies. “When one nerve cell fires, the others fire as well, even if sections of the bell are impaired.”
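
As a toy illustration of that robustness, and emphatically not the Bonn group’s actual model, the following Python sketch arranges stand-in “nerve cells” in a ring around the bell. A firing cell excites its neighbors so that activity spreads as a wave, and cutting out a stretch of cells to mimic an injury still lets the wave reach the rest of the ring.

```python
import numpy as np

N = 40                   # stand-in nerve cells arranged in a ring around the bell
THRESHOLD = 1.0
REFRACTORY_STEPS = 5     # steps a cell stays silent after firing

def simulate(injured=frozenset()):
    """Propagate a wave of firing around the ring, skipping 'injured' cells."""
    potential = np.zeros(N)
    refractory = np.zeros(N, dtype=int)
    fired = set()                        # cells that fired at least once
    potential[0] = THRESHOLD             # stimulate one cell to start the wave
    for _ in range(4 * N):
        firing = (potential >= THRESHOLD) & (refractory == 0)
        for i in np.where(firing)[0]:
            fired.add(int(i))
            refractory[i] = REFRACTORY_STEPS
            # Excite both neighbours on the ring, unless they are injured.
            for j in ((i - 1) % N, (i + 1) % N):
                if j not in injured:
                    potential[j] += THRESHOLD
        potential[firing] = 0.0
        refractory = np.maximum(refractory - 1, 0)
    return fired

print(len(simulate()), "cells fired in the intact ring")
print(len(simulate(injured=frozenset(range(10, 20)))), "cells fired despite the injury")
```

Even in this crude sketch, stimulating a single cell drives activity around the whole ring, and removing a block of cells only silences that block; the rest of the ring still fires, mirroring the behavior the researchers describe.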

The moon jellyfish is the latest animal species whose neural networks are being studied. The natural environment can provide many answers to new questions about neural networks, artificial intelligence, robotics, and more. Underwater robots are already being developed based on the swimming principles of jellyfish.

“Perhaps our study can help to improve the autonomous control of these robots,” Pallasdies says.

The scientists hope that their research and ongoing work will help explain the early evolution of neural networks. 

 


Amazon Creates New Tool To Engineer AI Models With Just A Few Lines Of Code


As efforts to make machine learning easier and more accessible increase, different companies are creating tools to simplify the creation and optimization of deep learning models. As VentureBeat reports, Amazon has launched a new tool designed to help create and modify machine learning models in just a few lines of code.

Carrying out machine learning on a dataset is often a long, complex task. The data must be transformed and preprocessed, and then the proper model must be created and customized. Tweaking a model’s hyperparameters and then retraining can take a long time, and to help solve issues like this, Amazon has launched AutoGluon. AutoGluon is an attempt to automate much of the overhead that typically comes with creating a machine learning system. For instance, not only do machine learning engineers have to decide on an appropriate architecture, they also need to experiment with the model’s hyperparameters. AutoGluon endeavors to make both the creation of the neural network architecture and the selection of appropriate hyperparameters easier.

AutoGluon is based on work initially begun by Microsoft and Amazon in 2017. The original Gluon was a machine learning interface designed to let developers mix and match optimized components to create their own models, whereas AutoGluon creates a model end-to-end based on the desires of the user. AutoGluon is reportedly capable of producing a model and selecting its hyperparameters, within a range of specified choices, with as few as three lines of code. The developer only has to provide a few arguments, such as the desired training completion time, and AutoGluon will produce the best model it can within the specified runtime and the available computational resources.
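
For a sense of what that looks like in practice, a tabular-prediction workflow is roughly the following. The exact import path and argument names have shifted across AutoGluon releases, so treat this as an illustrative sketch rather than the canonical API of the version described here.

```python
from autogluon.tabular import TabularDataset, TabularPredictor

# Load a CSV of training data; "label" stands in for whatever column
# holds the target variable in your dataset.
train_data = TabularDataset("train.csv")

# Fit an ensemble of models, letting AutoGluon choose architectures and
# hyperparameters within a user-specified time budget (in seconds).
predictor = TabularPredictor(label="label").fit(train_data, time_limit=600)

# Use the best model found within the budget to predict on new data.
test_data = TabularDataset("test.csv")
predictions = predictor.predict(test_data)
```

The point of the design is that everything beneath those calls, including model selection, hyperparameter search, and ensembling, happens automatically within the given time and hardware budget.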

AutoGluon is currently capable of creating models for image classification, text classification, object detection, and tabular prediction. AutoGluon’s API is also intended to allow more experienced developers to customize the auto-generated model and improve performance. At the moment, AutoGluon is only available for Linux and requires Python 3.6 or 3.7.

Jonas Mueller, part of the AutoGluon development team, explained the reasoning behind the creation of AutoGluon:

“We developed AutoGluon to truly democratize machine learning, and make the power of deep learning available to all developers. AutoGluon solves this problem as all choices are automatically tuned within default ranges that are known to perform well for the particular task and model.”

AutoGluon is the latest in a long line of methods intended to reduce the expertise and time needed to train machine learning models. Software libraries like Theano automated the calculation of gradient vectors, while Keras let developers easily specify certain desired hyperparameters. Amazon believes there is still more ground to be covered in democratizing machine learning, such as making data preprocessing and hyperparameter tuning simpler.

The creation of AutoGluon seems to be part of a broader effort by Amazon to make training and deploying machine learning systems easier and more accessible. Amazon has also made machine learning-centric upgrades to its AWS suite, for example to the AWS SageMaker toolkit, which lets developers train and deploy models to the cloud. SageMaker comes with a variety of tools that let developers automatically choose algorithms, train and validate models, and improve model accuracy.
