

AI Based on Slow Brain Dynamics


Scientists at Bar-Ilan University in Israel have combined advanced experiments on neural cultures with large-scale simulations to create a new type of ultrafast artificial intelligence. The new AI is based on the slow dynamics of the human brain, which, the researchers report, support better learning rates than the best learning algorithms available today.

Machine learning is, in fact, strongly related to and inspired by the dynamics of our brains. With the speed of modern computers and their large data sets, deep learning algorithms have come to rival human experts in a variety of fields. However, these algorithms have characteristics very different from those of the human brain.

The team published its work in the journal Scientific Reports. The researchers set out to reconnect neuroscience and advanced artificial intelligence algorithms, a bridge between fields that has been largely abandoned for decades.

Professor Ido Kanter of Bar-Ilan University’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, the lead author of the study, commented on the relationship between the two fields.

“The current scientific and technological viewpoint is that neurobiology and machine learning are two distinct disciplines that advance independently,” he said. “The absence of expectedly reciprocal influence is puzzling.” 

“The number of neurons in a brain is less than the number of bits in a typical disc size of modern personal computers, and the computational speed of the brain is like the second hand on a clock, even slower than the first computer invented over 70 years ago,” he said. 

“In addition, the brain’s learning rules are very complicated and remote from the principles of learning steps in current artificial intelligence algorithms.” 

Professor Kanter works with a research team including Herut Uzan, Shira Sardi, Amir Goldental, and Roni Vardi. 

Brain dynamics have to deal with asynchronous inputs, because physical reality changes and develops continuously and there is no clock synchronizing the nerve cells. Artificial intelligence algorithms, by contrast, are based on synchronous inputs: different inputs arriving within the same frame are treated as simultaneous, and their relative timings are normally ignored.

Professor Kanter went on to explain this dynamic. 

“When looking ahead one immediately observes a frame with multiple objects. For instance, while driving one observes cars, pedestrian crossings, and road signs, and can easily identify their temporal ordering and relative positions,” he said. “Biological hardware (learning rules) is designed to deal with asynchronous inputs and refine their relative information.” 
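To make the distinction concrete, here is a minimal sketch (not code from the study) contrasting a synchronous, frame-based input with an asynchronous, timestamped event stream; all names and numbers below are illustrative assumptions.

```python
import numpy as np

# Synchronous, frame-based input: one fixed-size vector per step.
# Everything in the frame is treated as arriving at the same instant,
# so relative timing inside the frame is discarded.
frame = np.array([0.9, 0.1, 0.0, 0.7])  # e.g. activations of four detectors

# Asynchronous, event-based input: each signal keeps its own timestamp,
# so the relative ordering ("car before pedestrian") is preserved.
events = [
    (12.3, "car"),         # (arrival time in ms, detected object)
    (14.1, "road_sign"),
    (15.8, "pedestrian"),
]

# A frame-based learner sees only `frame`; an event-based learner can
# also exploit the inter-event intervals below.
intervals = np.diff([t for t, _ in events])
print(intervals)  # approximately [1.8 1.7]
```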

One of the study’s key points is that the ultrafast learning rates are roughly the same whether the network is small or large. According to the researchers, “the disadvantage of the complicated brain’s learning scheme is actually an advantage.”

The study also shows that learning can take place without discrete learning steps; instead, it can be achieved through self-adaptation based on asynchronous inputs. In the human brain, this type of learning happens in the dendrites, the short extensions of nerve cells, and at the different terminals of each neuron, something that has been observed before. Previously, however, the fact that network dynamics under dendritic learning are governed by weak weights was believed to be unimportant.
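The article does not spell out the authors’ learning rule, but a toy, timing-based adaptation along the following lines may help illustrate the idea of weights adjusting themselves from the relative arrival times of asynchronous inputs, with no separate learning step. Every constant and function here is a hypothetical placeholder, not the mechanism from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 5
weights = 0.1 * np.ones(n_inputs)   # deliberately weak initial weights
tau = 5.0                           # assumed adaptation time constant (ms)
rate = 0.02                         # assumed adaptation strength

def adapt(weights, arrival_times, fire_time):
    """Toy rule: inputs arriving before the neuron fires are strengthened,
    inputs arriving after it are weakened. Adaptation happens continuously
    as events stream in, not in discrete training steps."""
    dt = fire_time - arrival_times
    weights = weights + rate * np.sign(dt) * np.exp(-np.abs(dt) / tau)
    return np.clip(weights, 0.0, 1.0)

# Stream of asynchronous events: each input line has its own arrival time.
for _ in range(100):
    arrivals = rng.uniform(0.0, 20.0, size=n_inputs)
    fire = arrivals.min() + rng.uniform(1.0, 5.0)  # crude stand-in for a firing time
    weights = adapt(weights, arrivals, fire)

print(weights)
```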

These findings carry several implications. Efficient deep learning algorithms that mirror the brain’s very slow dynamics could, when run on fast modern computers, give rise to a new class of advanced artificial intelligence.

The study also pushes for cooperation between the fields of neurobiology and artificial intelligence, which can help both fields advance further. According to the research group, “Insights of fundamental principles of our brain have to be once again at the center of future artificial intelligence.” 

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.


A New AI System Could Create More Hope For People With Epilepsy


As Engadget reports, two AI researchers may have created a system that offers new hope for people suffering from epilepsy: a system “that can predict epileptic seizures with 99.6-percent accuracy,” and do so up to an hour before a seizure occurs.

This is not the first advance of its kind: researchers at the Eindhoven University of Technology (TU/e) in the Netherlands previously developed a smart arm bracelet that can predict epileptic seizures at night. But the accuracy and lead time of the new AI system, as IEEE Spectrum notes, offer more hope to the roughly 50 million people around the world who suffer from epilepsy, based on data from the World Health Organization. Of these patients, about 70 percent can control their seizures with medication if it is taken on time.

The new AI system was created by Hisham Daoud and Magdy Bayoumi of the University of Louisiana at Lafayette and is lauded as “a major leap forward from existing prediction methods.” As Daoud, one of the two researchers who developed the system, explains, “Due to unexpected seizure times, epilepsy has a strong psychological and social effect on patients.”

As the researchers explain, “each person exhibits unique brain patterns, which makes it hard to accurately predict seizures.” Until now, existing models have predicted seizures “in a two-stage process, where the brain patterns must be extracted manually and then a classification system is applied,” which, as Daoud explains, added to the time needed to make a seizure prediction.

In their approach, described in a study published on 24 July in IEEE Transactions on Biomedical Circuits and Systems, “the features extraction and classification processes are combined into a single automated system, which enables earlier and more accurate seizure prediction.”

To further boost the accuracy of their system, Daoud and Bayoumi “incorporated another classification approach whereby a deep learning algorithm extracts and analyzes the spatial-temporal features of the patient’s brain activity from different electrode locations, boosting the accuracy of their model.” And since “EEG readings can involve multiple ‘channels’ of electrical activity,” the two researchers applied an additional algorithm to identify the most predictive channels of electrical activity, speeding up the prediction process even more.
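The study’s exact architecture is not described in the article, so the following PyTorch sketch only illustrates the general idea of a single end-to-end network that extracts spatio-temporal features from multi-channel EEG windows and classifies them directly; the layer sizes, channel count, and window length are assumptions, not the authors’ settings.

```python
import torch
import torch.nn as nn

class SeizurePredictor(nn.Module):
    """Toy end-to-end model: 1-D convolutions over the EEG channels extract
    features, an LSTM summarizes how they evolve over time, and a linear
    head outputs a pre-seizure probability. Illustrative only."""
    def __init__(self, n_channels=23, hidden=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
        )
        self.rnn = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, channels, time)
        f = self.features(x)              # (batch, 32, time')
        f = f.transpose(1, 2)             # (batch, time', 32)
        _, (h, _) = self.rnn(f)
        return torch.sigmoid(self.head(h[-1]))

# One 10-second window of 23-channel EEG sampled at 256 Hz (assumed numbers).
window = torch.randn(1, 23, 2560)
print(SeizurePredictor()(window))         # probability of an impending seizure
```

In the same spirit, the channel-selection step mentioned above could be as simple as ranking channels by how much each one improves validation accuracy and keeping the top few, though the article does not say which algorithm the authors actually used.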

The complete system was then tested on 22 patients at Boston Children’s Hospital. While the sample size was small, the system proved highly accurate (99.6%) and had “a low tendency for false positives, at 0.004 false alarms per hour,” or roughly one false alarm every 250 hours of monitoring.

As Daoud explained, the next step is to develop a customized computer chip to run the algorithms: “We are currently working on the design of efficient hardware [device] that deploys this algorithm, considering many issues like system size, power consumption, and latency to be suitable for practical application in a comfortable way to the patient.”



New Tool Can Show Researchers What GANs Leave Out Of An Image


A team of researchers from the MIT-IBM Watson AI Lab recently created a method of revealing what a Generative Adversarial Network (GAN) leaves out when asked to generate images. The study, dubbed Seeing What a GAN Cannot Generate, was presented at the International Conference on Computer Vision.

Generative Adversarial Networks have become more robust, sophisticated, and widely used in the past few years. They have become quite good at rendering images full of detail, as long as the image is confined to a relatively small area. However, when GANs are used to generate images of larger scenes and environments, they tend not to perform as well. When asked to render scenes full of many objects, like a busy street, GANs often leave many important aspects of the image out.

According to MIT News, the research was developed in part by David Bau, a graduate student in the Department of Electrical Engineering and Computer Science at MIT. Bau explained that researchers usually concentrate on refining what machine learning systems pay attention to and on discerning how certain inputs are mapped to certain outputs. However, he also explained that understanding what data is ignored by machine learning models is often just as important, and the research team hopes their tools will inspire researchers to pay attention to the ignored data.

Bau’s interest in GANs was spurred by the fact that they could be used to investigate the black-box nature of neural nets and to gain an intuition of how the networks might be reasoning. Bau previously worked on a tool that could identify specific clusters of artificial neurons, labeling them as being responsible for the representation of real-world objects such as books, clouds, and trees. Bau also had experience with a tool dubbed GANPaint, which enables artists to remove and add specific features from photos by using GANs. According to Bau, the GANPaint application revealed a potential problem with the GANs, a problem that became apparent when Bau analyzed the images. As Bau told MIT News:

“My advisor has always encouraged us to look beyond the numbers and scrutinize the actual images. When we looked, the phenomenon jumped right out: People were getting dropped out selectively.”

While machine learning systems are designed to extract patterns from images, they can also end up ignoring relevant ones. Bau and his colleagues experimented with training GANs on various indoor and outdoor scenes, but across all of the different scene types, the GANs left out important details like cars, road signs, people, and bicycles. This was true even when the omitted objects were important to the scene in question.

The research team hypothesized that when a GAN is trained on images, it may find it easier to capture the patterns that are easier to represent, such as large stationary objects like landscapes and buildings, and learn these in preference to harder-to-represent patterns, such as cars and people. It has long been common knowledge that GANs often omit important, meaningful details when generating images, but the MIT team’s study may be the first demonstration that GANs omit entire object classes within an image.
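The article does not describe the team’s measurement procedure in detail, but one way to expose this kind of omission, loosely in the spirit of the study, is to run the same semantic segmenter over real and generated images and compare how much of each object class appears in each set. The sketch below assumes a `segment_fn` that returns an integer label map with labels in the range `[0, n_classes)` (any pretrained segmentation model would do); everything else is illustrative.

```python
import numpy as np

def class_histogram(images, segment_fn, n_classes):
    """Fraction of pixels assigned to each class across a set of images.
    `segment_fn(image)` is assumed to return a numpy array of class ids."""
    counts = np.zeros(n_classes)
    total = 0
    for img in images:
        labels = segment_fn(img)
        counts += np.bincount(labels.ravel(), minlength=n_classes)
        total += labels.size
    return counts / total

def dropped_classes(real_images, fake_images, segment_fn, n_classes, ratio=0.25):
    """Classes whose pixel share in generated images falls far below their
    share in real images, e.g. people vanishing from generated street scenes."""
    real_h = class_histogram(real_images, segment_fn, n_classes)
    fake_h = class_histogram(fake_images, segment_fn, n_classes)
    return [c for c in range(n_classes)
            if real_h[c] > 0 and fake_h[c] < ratio * real_h[c]]
```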

The research team notes that it is possible for GANs to achieve their numerical goals even while leaving out objects that humans care about when looking at images. If images generated by GANs are going to be used to train complex systems like autonomous vehicles, the image data should be closely scrutinized, because there is a real concern that critical objects like signs, people, and other cars could be left out of the images. Bau explained that their research shows why the performance of a model shouldn’t be judged on accuracy alone:

“We need to understand what the networks are and aren’t doing to make sure they are making the choices we want them to make.”



DeepMind’s AI Reaches Highest Rank of StarCraft II


DeepMind’s AlphaStar, an artificial intelligence (AI) system, has reached the highest level in StarCraft II, an extremely popular and complex computer game. The AI outperformed 99.8% of all registered human players.

It took the AI system 44 days of training to reach that level. It started from recordings of some of the best human players and learned from them until it eventually began playing against itself.

“AlphaStar has become the first AI system to reach the top tier of human performance in any professionally played e-sport on the full unrestricted game under professionally approved conditions,” said David Silver, a researcher at DeepMind.

“Ever since computers cracked Go, chess and poker, the game of StarCraft has emerged, essentially by consensus from the community, as the next grand challenge for AI,” Silver said. “It’s considered to be the game which is most at the limit of human capabilities.”

The work was published in the scientific journal Nature.

What is StarCraft?

Put simply, the point of StarCraft is to build civilizations and fight against aliens. 

It is a real-time strategy game in which players control hundreds of units and have to make important economic decisions. Within the game there are tens of thousands of time-steps and thousands of possible actions, all selected in real time over roughly ten minutes of gameplay.

AlphaStar “Agents”

DeepMind developed a separate AlphaStar “agent” for each of the game’s three races, each of which has a unique set of strengths and weaknesses. In the “AlphaStar league,” the AI competed against copies of itself and against “exploiter” agents designed to target AlphaStar’s weaknesses.
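DeepMind’s actual league training is far more elaborate, but a stripped-down, hypothetical sketch of the idea, main agents that keep playing both frozen past copies of themselves and dedicated “exploiter” agents, can be written as a simple matchmaking loop. Every class and function below is a toy placeholder, not DeepMind’s implementation.

```python
import random

class ToyAgent:
    """Placeholder agent: a single 'skill' number stands in for a full policy."""
    def __init__(self, skill=0.0):
        self.skill = skill
    def clone(self):
        return ToyAgent(self.skill)
    def learn(self, won):
        self.skill += 0.1 if won else 0.02    # crude stand-in for policy updates

def play_match(a, b):
    """Toy match: the higher-skilled agent wins more often. True if `a` wins."""
    return random.random() < 1.0 / (1.0 + 10 ** ((b.skill - a.skill) / 4.0))

def run_league(main_agents, exploiters, n_games=1000):
    """Simplified league: main agents face frozen past copies of themselves
    and 'exploiter' agents whose only job is to beat the main agents."""
    past_copies = [a.clone() for a in main_agents]
    for game in range(1, n_games + 1):
        agent = random.choice(main_agents)
        opponent = random.choice(past_copies + exploiters)
        agent_won = play_match(agent, opponent)
        agent.learn(agent_won)
        if opponent in exploiters:
            opponent.learn(not agent_won)     # exploiters train only against main agents
        if game % 100 == 0:
            past_copies.append(agent.clone()) # periodically freeze a new snapshot

main = [ToyAgent() for _ in range(3)]         # e.g. one per StarCraft race
run_league(main, [ToyAgent() for _ in range(2)])
print([round(a.skill, 2) for a in main])
```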

One of the most impressive aspects of the AI is that it was not designed to win by performing actions at superhuman speed. Instead, it learned different winning strategies.

Like StarCraft, real-world applications require artificial agents to interact, compete, and coordinate within a complex environment containing other agents. This is why StarCraft has become such an important testbed for artificial intelligence research.

Military Interest

Perhaps one of the more unexpected aspects of this work is that it’ll be of interest to the military. 

“Military analysts will certainly be eyeing the successful AlphaStar real-time strategies as a clear example of the advantages of AI for battlefield planning. But this is an extremely dangerous idea with the potential for humanitarian disaster. AlphaStar learns strategy from big data in one particular environment. The data from conflicts such as Syria and Yemen would be too sparse to be of use,” said Noel Sharkey, a professor of AI and robotics at the University of Sheffield.

“And as DeepMind explained at a recent United Nations event, such methods would be highly dangerous for weapons control as the moves are unpredictable and can be creative in unexpected ways. This is against the laws that govern armed conflict.”

Coming a Long Way in a Short Time

Back in January, professional StarCraft II player Grzegorz Komincz defeated AlphaStar. It was a significant setback for Google, which had invested millions of dollars in the technology. Since then, DeepMind’s AI has come a long way in a short amount of time, and these new developments have huge implications.
