Deep Learning

AI Based on Slow Brain Dynamics

Scientists at Bar-Ilan University in Israel have used advanced experiments on neural cultures, together with large-scale simulations, to create a new ultrafast artificial intelligence based on the slow dynamics of the human brain. Those dynamics achieve better learning rates than the best learning algorithms available today. 

Machine learning is deeply rooted in the dynamics of our brains. With the speed of modern computers and their large data sets, we have been able to create deep learning algorithms that rival human experts in a variety of fields. However, these learning algorithms have characteristics quite different from those of human brains. 

The team of scientists at the university published their work in the journal Scientific Reports. They worked to bridge neuroscience and advanced artificial intelligence algorithms, a connection that has been largely abandoned for decades. 

Professor Ido Kanter of Bar-Ilan University’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, the lead author of the study, commented on the relationship between the two fields. 

“The current scientific and technological viewpoint is that neurobiology and machine learning are two distinct disciplines that advance independently,” he said. “The absence of expectedly reciprocal influence is puzzling.” 

“The number of neurons in a brain is less than the number of bits in a typical disc size of modern personal computers, and the computational speed of the brain is like the second hand on a clock, even slower than the first computer invented over 70 years ago,” he said. 

“In addition, the brain’s learning rules are very complicated and remote from the principles of learning steps in current artificial intelligence algorithms.” 

Professor Kanter works with a research team including Herut Uzan, Shira Sardi, Amir Goldental, and Roni Vardi. 

Brain dynamics must deal with asynchronous inputs, since physical reality changes and develops continuously; as a result, there is no synchronization among the nerve cells. Artificial intelligence algorithms, in contrast, are based on synchronous inputs: the relative timings of different inputs within the same frame are typically ignored. 

Professor Kanter went on to explain this dynamic. 

“When looking ahead one immediately observes a frame with multiple objects. For instance, while driving one observes cars, pedestrian crossings, and road signs, and can easily identify their temporal ordering and relative positions,” he said. “Biological hardware (learning rules) is designed to deal with asynchronous inputs and refine their relative information.” 
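The contrast between synchronous frames and asynchronous inputs can be sketched in a few lines of code. This is only an illustration of the distinction, not the study's learning rule; the event list and timestamps below are invented:

```python
# A frame-based (synchronous) system sees one snapshot and discards timing;
# an event-based (asynchronous) system processes inputs in temporal order.
events = [(0.2, "car"), (0.9, "pedestrian"), (0.5, "road sign")]  # (time, object)

# Synchronous view: a single frame in which temporal ordering is lost.
frame = {obj for _, obj in events}

# Asynchronous view: events handled in arrival order, timing retained.
ordered = [obj for _, obj in sorted(events)]

print(ordered)  # ['car', 'road sign', 'pedestrian']
```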

One key point of the study is that ultrafast learning rates are roughly the same whether the network is small or large. In the researchers’ words, “the disadvantage of the complicated brain’s learning scheme is actually an advantage.” 

The study also shows that learning can take place without discrete learning steps; it can instead be achieved through self-adaptation based on asynchronous inputs. In the human brain, this type of learning occurs in the dendrites, short extensions of nerve cells, and in the different terminals of each neuron, as has been observed before. Previously, the fact that network dynamics under dendritic learning are governed by weak weights was believed to be unimportant. 

These findings have wide-ranging implications. Efficient deep learning algorithms modeled on the brain’s very slow dynamics, when run on fast computers, could enable a new class of advanced artificial intelligence. 

The study also pushes for cooperation between the fields of neurobiology and artificial intelligence, which can help both fields advance further. According to the research group, “Insights of fundamental principles of our brain have to be once again at the center of future artificial intelligence.” 

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Deep Learning

Uber’s Fiber Is A New Distributed AI Model Training Framework

According to VentureBeat, AI researchers at Uber have recently posted a paper to arXiv outlining a new platform intended to assist in the creation of distributed AI models. The platform, called Fiber, can be used to drive both reinforcement learning tasks and population-based learning. Fiber is designed to make large-scale parallel computation more accessible to non-experts, letting them take advantage of the power of distributed AI algorithms and models.

Fiber has recently been made open source on GitHub. It is compatible with Python 3.6 and above and with Kubernetes running on a Linux system in a cloud environment. According to the team of researchers, the platform can easily scale up to hundreds or thousands of individual machines.

The team of researchers from Uber explains that many of the most recent and relevant advances in artificial intelligence have been driven by larger models and algorithms trained with distributed training techniques. However, creating population-based models and reinforcement learning models remains difficult for distributed training schemes, which frequently have issues with efficiency and flexibility. Fiber makes distributed systems more reliable and flexible by combining cluster management software with dynamic scaling, letting users move their jobs from one machine to a large number of machines seamlessly.

Fiber is made up of three different components: an API, a backend, and a cluster layer. The API layer enables users to create things like queues, managers, and processes. The backend layer lets the user create and terminate jobs that are being managed by different clusters, and the cluster layer manages the individual clusters themselves along with their resources, which greatly reduces the number of items that Fiber has to keep tabs on.

Fiber enables jobs to be queued and run remotely on one local machine or many different machines, utilizing the concept of job-backed processes. Fiber also makes use of containers to ensure that things like input data and dependent packages are self-contained. The framework even includes built-in error handling, so that if a worker crashes it can be quickly revived. Fiber is able to do all of this while interacting with cluster managers, letting Fiber applications run as if they were normal applications running on a given computer cluster.

Experimental results showed that on average Fiber’s response time was a few milliseconds and that it also scaled up better than baseline AI techniques when built with 2,048 processor cores/workers. The length of time required to complete jobs decreased gradually as the set number of workers increased. IPyParallel completed 50 iterations of training in approximately 1400 seconds, while Fiber was able to complete the same 50 iterations of training in approximately 50 seconds with 512 workers available.

The coauthors of the Fiber paper explain that Fiber is able to achieve multiple goals, such as dynamically scaling algorithms and leveraging large volumes of computing power:

“[Our work shows] that Fiber achieves many goals, including efficiently leveraging a large amount of heterogeneous computing hardware, dynamically scaling algorithms to improve resource usage efficiency, reducing the engineering burden required to make [reinforcement learning] and population-based algorithms work on computer clusters, and quickly adapting to different computing environments to improve research efficiency. We expect it will further enable progress in solving hard [reinforcement learning] problems with [reinforcement learning] algorithms and population-based methods by making it easier to develop these methods and train them at the scales necessary to truly see them shine.”


Deep Learning

Researchers Develop Computer Algorithm Inspired by Mammalian Olfactory System

Researchers from Cornell University have created a computer algorithm inspired by the mammalian olfactory system. Scientists have long sought explanations of how mammals learn and identify smells. The new algorithm provides insight into the workings of the brain, and implementing it on a computer chip allows it to learn patterns more quickly and reliably than current machine learning models. 

Thomas Cleland is a professor of psychology and senior author of the study titled “Rapid Learning and Robust Recall in a Neuromorphic Olfactory Circuit,” published in Nature Machine Intelligence on March 16.

“This is a result of over a decade of studying olfactory bulb circuitry in rodents and trying to figure out essentially how it works, with an eye towards things we know animals can do that our machines can’t,” Cleland said. 

“We now know enough to make this work. We’ve built this computational model based on this circuitry, guided heavily by things we know about the biological systems’ connectivity and dynamics,” he continued. “Then we say, if this were so, this would work. And the interesting part is that it does work.”

Intel Computer Chip

Cleland was joined by co-author Nabil Imam, a researcher at Intel, and together they applied the algorithm to an Intel computer chip. The chip is called Loihi, and it is neuromorphic, which means it is inspired by the functions of the brain. The chip has digital circuits that mimic the way in which neurons learn and communicate. 

The Loihi chip relies on parallel cores that communicate via discrete spikes, and each one of these spikes has an effect that can change depending on local activity. This requires different strategies for algorithm design than what is used in existing computer chips. 
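The spike-based style of computation described above can be illustrated with a toy leaky integrate-and-fire neuron, the textbook model behind most spiking hardware. This is a generic sketch, not code for Loihi itself, and every constant here is an arbitrary assumption:

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# each timestep, integrates the input current, and emits a discrete spike
# when it crosses threshold, then resets. Constants are illustrative only.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the timesteps at which the neuron spikes."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = v * leak + current   # leak, then integrate the input
        if v >= threshold:       # threshold crossing -> spike
            spikes.append(t)
            v = 0.0              # reset after spiking
    return spikes

print(simulate_lif([0.4] * 10))  # [2, 5, 8]
```

A constant drive of 0.4 accumulates for three steps before each spike, showing how timing, not just magnitude, carries information between cores.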

Through the use of neuromorphic computer chips, machines could work a thousand times faster than a computer’s central or graphics processing units at identifying patterns and carrying out certain tasks. 

The Loihi research chip can also run certain algorithms while using around a thousand times less power than traditional methods. This makes it well suited to the algorithm, which can accept input patterns from various sensors, learn patterns quickly and sequentially, and identify each meaningful pattern even under strong sensory interference. The algorithm can successfully identify odors even when the pattern is an astounding 80% different from the pattern the computer originally learned. 

“The pattern of the signal has been substantially destroyed,” Cleland said, “and yet the system is able to recover it.”
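The kind of recovery Cleland describes can be sketched as nearest-pattern recall over stored binary "odor" patterns. This toy model only illustrates recall under interference; the pattern sizes, count, and corruption level are invented, and it is not the study's neuromorphic algorithm:

```python
# Store binary patterns and recover the closest one from a corrupted
# probe by Hamming distance, a minimal model of robust recall.
import random

def recall(probe, stored):
    """Return the stored pattern closest to the probe in Hamming distance."""
    return min(stored, key=lambda p: sum(a != b for a, b in zip(p, probe)))

random.seed(0)
stored = [[random.randint(0, 1) for _ in range(100)] for _ in range(3)]

probe = list(stored[0])
for i in random.sample(range(100), 20):  # corrupt 20% of the bits
    probe[i] ^= 1

print(recall(probe, stored) == stored[0])  # True: the original is recovered
```

Unrelated random patterns differ from the probe in roughly half their bits, so even a heavily corrupted probe stays closest to its original.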

The Mammalian Brain

The brain of a mammal can identify and remember smells extremely well, with thousands of olfactory receptors and complex neural networks working to analyze the patterns associated with odors. One thing mammals do better than artificial intelligence systems is retain what they have learned, even after acquiring new knowledge. In deep learning approaches, the network must be presented with everything at once, since new information can degrade or even destroy what the system previously learned. 

“When you learn something, it permanently differentiates neurons,” Cleland said. “When you learn one odor, the interneurons are trained to respond to particular configurations, so you get that segregation at the level of interneurons. So on the machine side, we just enhance that and draw a firm line.”

Cleland spoke about how the team came up with new experimental approaches. 

“When you start studying a biological process that becomes more intricate and complex than you can just simply intuit, you have to discipline your mind with a computer model,” he said. “You can’t fuzz your way through it. And that led us to a number of new experimental approaches and ideas that we wouldn’t have come up with just by eyeballing it.”

 


Big Data

Human Genome Sequencing and Deep Learning Could Lead to a Coronavirus Vaccine – Opinion

The AI community must collaborate with geneticists to find a treatment for those deemed most at risk from the coronavirus. A potential treatment could involve removing a person’s cells, editing the DNA, and then injecting the cells back in, now hopefully armed with a successful immune response. This approach is already being explored for other vaccines.

The first step would be sequencing the entire genome of a sizeable segment of the human population.

Sequencing Human Genomes

Sequencing the first human genome cost $2.7 billion and took nearly 15 years to complete. The cost of sequencing an entire human genome has since dropped dramatically: as recently as 2015 it was $4,000, and it is now less than $1,000 per person. The cost could drop a few percentage points more once economies of scale are taken into consideration.

We need to sequence the genome of two different types of patients:

  1. Infected with Coronavirus; but healthy
  2. Infected with Coronavirus; but poor immune response

It is impossible to predict which data point will be most valuable, but each sequenced genome would provide a dataset. The more data, the more options there are to locate DNA variations that increase the body’s resistance to the disease vector.

Nations are currently losing trillions of dollars to this outbreak, so the cost of $1,000 per human genome is minor in comparison. A minimum of 1,000 volunteers from each segment of the population would arm researchers with significant volumes of big data. Should the trial increase in size by an order of magnitude, the AI would have far more training data, greatly increasing the odds of success. The more data the better, which is why a target of 10,000 volunteers should be aimed for.

Machine Learning

While multiple machine learning techniques could be applied, deep learning would be used to find patterns in the data. For instance, it might be observed that certain DNA variants correspond to high immunity, while others correspond to high mortality. At a minimum, we would learn which segments of the human population are more susceptible and should be quarantined.
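As a minimal sketch of that pattern-finding step, a single logistic "neuron" can learn which of several synthetic variant flags predicts an outcome. Every feature, label, and constant below is invented for illustration; real genomic data would be vastly higher-dimensional and far noisier:

```python
# Online logistic regression on toy "variant" data: the model should
# assign its largest weight to the feature that drives the label.
import math
import random

def train(samples, labels, epochs=200, lr=0.5):
    """Return one learned weight per feature."""
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

random.seed(1)
# Feature 0 fully determines the label; features 1 and 2 are noise.
samples = [[random.randint(0, 1) for _ in range(3)] for _ in range(40)]
labels = [x[0] for x in samples]

w = train(samples, labels)
print(w[0] > 0 and w[0] > max(w[1], w[2]))  # True: feature 0 dominates
```

Inspecting the learned weights is the toy analogue of asking which DNA variations matter most.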

To decipher this data, an Artificial Neural Network (ANN) would be hosted in the cloud, and sequenced human genomes from around the world would be uploaded to it. With time being of the essence, parallel computing would reduce the time required for the ANN to work its magic.

We could even take it one step further and feed the output data sorted by the ANN into a separate system called a Recurrent Neural Network (RNN). The RNN would use reinforcement learning to identify which gene selected by the initial ANN is most successful in a simulated environment. The reinforcement learning agent would gamify the entire process, creating a simulated setting to test which DNA changes are most effective.

A simulated environment is like a virtual game environment, something many AI companies are well positioned to take advantage of given their previous success in designing AI algorithms that win at esports. This includes companies such as DeepMind and OpenAI.

These companies can use their underlying architectures, optimized for mastering video games, to create a simulated environment, test gene edits, and learn which edits lead to specific desired changes.

Once a gene is identified, another technology is used to make the edits.

CRISPR

Recently, the first-ever study using CRISPR to edit DNA inside the human body was approved. It aims to treat a rare genetic disorder that affects one of every 100,000 newborns. The condition can be caused by mutations in as many as 14 genes that play a role in the growth and operation of the retina. In this case, CRISPR carefully targets the DNA and causes slight, temporary damage to the strand, prompting the cell to repair itself. It is this restorative healing process that has the potential to restore eyesight.

While we are still waiting to learn whether this treatment works, the precedent of having CRISPR approved for trials inside the human body is transformational. Potential applications include improving the body’s immune response to specific disease vectors.

Potentially, we could manipulate the body’s natural genetic resistance to a specific disease. The diseases that could be targeted are diverse, but the community should focus on treating the new global epidemic, coronavirus: a threat that, if left unchecked, could be a death sentence for a large percentage of our population.

Final Thoughts

While there are many potential paths to success, all of them require that geneticists, epidemiologists, and machine learning specialists work together. The eventual treatment may be as described above, or it may turn out to be unimaginably different; either way, the opportunity lies in sequencing the genomes of a large segment of the population.

Deep learning is the best analysis tool humans have ever created; at a minimum, we should attempt to use it in the search for a vaccine.

Given what is at risk in the current epidemic, these three scientific communities need to come together to work on a cure.
