
DeepMind Creates AI That Replays Memories Like The Hippocampus


The human brain often recalls past memories (seemingly) unprompted. As we go about our day, we experience spontaneous flashes of memory from our lives. While this spontaneous conjuring of memories has long been of interest to neuroscientists, AI research company DeepMind recently published a paper detailing how one of its AI systems replicated this distinctive pattern of recall.

This spontaneous conjuring of memories, known as neural replay, is tightly linked with the hippocampus. The hippocampus is a seahorse-shaped structure in the brain that belongs to the limbic system, and it is associated with the formation of new memories as well as the emotions those memories spark. Current theories on the role of the hippocampi (there is one in each hemisphere of the brain) hold that different regions of the hippocampus handle different types of memories. For instance, spatial memory is believed to be handled in the posterior region of the hippocampus.

As reported by Jesus Rodriguez, Dr. John O’Keefe is responsible for many contributions to our understanding of the hippocampus, including the discovery of hippocampal “place” cells. Place cells in the hippocampus are triggered by stimuli in a specific environment. For example, experiments on rats showed that specific neurons would fire when the rats ran through certain portions of a track. Researchers continued to monitor the rats while they rested and found that the same patterns of neurons, each denoting a portion of the track, would fire again, although at an accelerated speed. The rats seemed to be replaying their memories of the track in their minds.

In humans, recalling memories is an important part of the learning process, but the phenomenon is difficult to recreate when trying to enable AI systems to learn.

The DeepMind team set about trying to recreate the phenomenon of recall using reinforcement learning. Reinforcement learning algorithms work by receiving feedback from their interactions with the environment, earning rewards whenever they take actions that bring them closer to a desired goal. In this context, the reinforcement learning agent records events and plays them back later, with the system being reinforced to recall past experiences more efficiently.

DeepMind added the replay of experiences to a reinforcement learning algorithm using a replay buffer that would play back memories – recorded experiences – to the system at specific times. Some versions of the system had the experiences played back in random order, while other models used pre-selected playback orders. Alongside these experiments with playback order, the researchers also experimented with different methods of replaying the experiences themselves.
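As a rough illustration of the replay-buffer idea – a minimal sketch in Python, not DeepMind’s actual implementation – experiences can be recorded as they occur and later sampled back either in random order or in a fixed, chronological order:

import random
from collections import deque

class ReplayBuffer:
    """Minimal experience store; holds (state, action, reward, next_state) tuples."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def record(self, experience):
        self.buffer.append(experience)

    def sample_random(self, batch_size):
        # Random-order playback: decorrelates consecutive experiences.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

    def sample_sequential(self, batch_size):
        # Pre-selected (here: chronological) playback of the oldest experiences.
        return list(self.buffer)[:batch_size]

During training, the agent would periodically draw a batch from such a buffer and update its policy on those replayed experiences, not only on what just happened.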

There are two primary methods that are used to provide reinforcement algorithms with recalled experiences. These methods are the imagination replay method and the movie replay method. The DeepMind paper uses an analogy to describe both of the strategies:

“Suppose you come home and, to your surprise and dismay, discover water pooling on your beautiful wooden floors. Stepping into the dining room, you find a broken vase. Then you hear a whimper, and you glance out the patio door to see your dog looking very guilty.”

As reported by Rodriguez, the imagination replay method doesn’t record the events in the order they were experienced. Rather, a probable causal relationship between the events is inferred, based on the agent’s understanding of the world. The movie replay method, by contrast, stores memories in the order in which the events occurred and replays the sequence of stimuli – “spilled water, broken vase, dog” – preserving the chronological ordering of events.
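In code terms, the two strategies differ only in the order in which stored events come back to the agent. The toy sketch below is purely illustrative – the event names and the hand-supplied “inferred” ordering stand in for what a learned world model would produce:

# The agent experienced the events in this chronological order.
observed = ["spilled water", "broken vase", "dog"]

def movie_replay(events):
    # Movie replay: preserve the order in which the events occurred.
    return list(events)

def imagination_replay(events, inferred_order):
    # Imagination replay: reorder events according to an inferred causal
    # model of the world (here supplied by hand for illustration).
    return [events[i] for i in inferred_order]

print(movie_replay(observed))                   # spilled water -> broken vase -> dog
print(imagination_replay(observed, [2, 1, 0]))  # dog -> broken vase -> spilled water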

Research from the field of neuroscience implies that the movie replay method is integral to creating associations between concepts and forging neural connections between events. Yet the imagination replay method could help the agent create new sequences when it reasons by analogy. For instance, the agent could reason that if a barrel is to oil as a vase is to water, a barrel could be spilled by a factory robot instead of a dog. Indeed, when DeepMind probed further into the possibilities of the imagination replay method, they found that their learning agent was able to create impressive, innovative sequences by drawing on previous experiences.

Most of the current progress in reinforcement learning memory is being made with the movie strategy, although researchers have recently begun to make progress with the imagination strategy as well. Research into both methods of AI memory can not only enable better performance from reinforcement learning agents, but also give us new insight into how the human mind might function.


Engineers Develop New Machine-Learning Method Capable of Cutting Energy Use


Engineers at the Swiss Center for Electronics and Microtechnology (CSEM) have developed a new machine-learning method capable of cutting energy use, as well as allowing artificial intelligence (AI) to complete tasks that were once considered too sensitive for it.

Reinforcement Learning Limitations

Reinforcement learning, in which a computer continuously improves by learning from its past experiences, is a major aspect of artificial intelligence. However, the technique is often difficult to apply to real-life scenarios, such as training climate-control systems: such applications cannot tolerate the drastic temperature swings a reinforcement learning agent would produce through trial-and-error exploration.

This exact issue is what the CSEM engineers set out to address with their new approach. They demonstrated that computers could first be trained on simplified theoretical models and only then turned loose on real-life systems. By the time the learning process reaches the real system, it has already worked through its trial and error on the theoretical model, so the real system sees no drastic fluctuations – solving the problem in the climate-control example.

Pierre-Jean Alet is head of smart energy systems research at CSEM and a co-author of the study.

“It’s like learning the driver’s manual before you start a car,” Alet says. “With this pre-training step, computers build up a knowledge base they can draw on so they aren’t flying blind as they search for the right answer.”

Energy Cuts

One of the most important results of the new method is that it can cut energy use by over 20%. The engineers tested the method on a heating, ventilation and air conditioning (HVAC) system in a 100-room building.

The engineers relied on three steps. First, they trained a computer on a “virtual model” built from simple equations describing the building’s behavior. Second, they fed the computer real building data – temperature, weather conditions and other variables – which made the training more accurate. Finally, they let the computer run the reinforcement learning algorithms to arrive at the best approach for the HVAC system.
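The paper describes the method formally; the sketch below only conveys the shape of those three steps, with a one-room toy thermal equation standing in for the “virtual model.” All names, constants, and equations here are illustrative assumptions, not CSEM’s code:

import random

ACTIONS = [-1.0, 0.0, 1.0]   # change in heating power (illustrative)
TARGET = 21.0                # desired room temperature in degrees C

def virtual_model(temp, action):
    # Step 1: a deliberately simple equation for how the room behaves.
    return temp + 0.5 * action - 0.1 * (temp - 15.0)

def reward(temp):
    return -abs(temp - TARGET)   # penalize deviation from the setpoint

q = {}  # Q-table keyed by (rounded temperature, action)

def choose(temp, eps=0.1):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((round(temp), a), 0.0))

def train(step_fn, episodes, alpha=0.1, gamma=0.9):
    for _ in range(episodes):
        temp = random.uniform(10.0, 30.0)
        for _ in range(50):
            a = choose(temp)
            nxt = step_fn(temp, a)
            best = max(q.get((round(nxt), b), 0.0) for b in ACTIONS)
            key = (round(temp), a)
            old = q.get(key, 0.0)
            q[key] = old + alpha * (reward(nxt) + gamma * best - old)
            temp = nxt

# Step 1: pre-train on the virtual model before ever touching the real system.
# (Step 2, refining the virtual model with real building data, is omitted here.)
train(virtual_model, episodes=500)
# Step 3: keep learning on the real building, starting from a Q-table that
# already avoids wild temperature swings (the real step function would go here).
train(virtual_model, episodes=50)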

The new method developed by the CSEM engineers could have big implications for machine learning. Many applications that were once thought to be “untouchable” by reinforcement learning, like those with large fluctuations, could now be approached in a new manner. This would result in lower energy usage, lower financial costs and many other benefits. 

The research was published in the journal IEEE Transactions on Neural Networks and Learning Systems, titled “A hybrid learning method for system identification and optimal control.” 

The authors include: Baptiste Schubnel, Rafael E. Carrillo, Pierre-Jean Alet and Andreas Hutter. 

 


Artificial Intelligence System Able to Move Individual Molecules


Image: Forschungszentrum Jülich / Christian Wagner

Scientists from Jülich and Berlin have developed an artificial intelligence system that is capable of autonomously learning how to move individual molecules using a scanning tunneling microscope. Because atoms and molecules do not behave like macroscopic objects, each of these building blocks requires its own procedure for being moved.

The new method, which the scientists believe can be used for research and production technologies like molecular 3D printing, was published in Science Advances.

3D Printing

Rapid prototyping, more commonly known as 3D printing, is extremely cost effective when it comes to creating prototypes or models. It has been increasing in importance over the years as the technology has constantly improved, and it is now a major tool used by industry.

Dr. Christian Wagner is head of the ERC working group on molecular manipulation at Forschungszentrum Jülich. 

“If this concept could be transferred to the nanoscale to allow individual molecules to be specifically put together or separated again just like LEGO bricks, the possibilities would be almost endless, given that there are around 10^60 conceivable types of molecules,” Wagner says.

Individual “Recipes”

One of the main challenges is the individual “recipe” needed for the scanning tunneling microscope to move each molecule back and forth – the sequence of movements that allows the tip of the microscope to arrange molecules spatially and in a targeted manner.

This so-called recipe cannot be calculated or deduced by intuition, owing to the complex mechanics at the nanoscale. The tip of the microscope ends in a rigid cone, to which the molecules lightly stick, and complex movement patterns are required to move them around.

Prof. Dr. Stefan Tautz is head of the Quantum Nanoscience Institute at Jülich.

“To date, such targeted movement of molecules has only been possible by hand, through trial and error. But with the help of a self-learning, autonomous software control system, we have now succeeded for the first time in finding a solution for this diversity and variability on the nanoscale, and in automating this process,” Tautz says. 

Reinforcement Learning

One of the fundamental aspects of this development is reinforcement learning, which is a type of machine learning that involves the algorithm repeatedly attempting a task and learning from each attempt. 

Prof. Dr. Klaus-Robert Müller is head of the Machine Learning department at TU Berlin.

“We do not prescribe a solution pathway for the software agent, but rather reward success and penalize failure,” he says.

“In our case, the agent was given the task of removing individual molecules from a layer in which they are held by a complex network of chemical bonds. To be precise, these were perylene molecules, such as those used in dyes and organic light-emitting diodes,” Dr. Christian Wagner adds. 

A key constraint is that the force required to move a molecule must never exceed the strength of the bond by which the tip of the tunneling microscope holds the molecule.

“The microscope tip therefore has to execute a special movement pattern, which we previously had to discover by hand, quite literally,” Wagner says. 

Reinforcement learning enters here: the software agent learns which movement patterns work and improves with each attempt.

However, the tip of the scanning tunneling microscope consists of metal atoms that can shift, which changes the strength of the bond to the molecule.

“Every new attempt makes the risk of a change and thus the breakage of the bond between tip and molecule greater. The software agent is therefore forced to learn particularly quickly, since its experiences can become obsolete at any time,” Prof. Dr. Stefan Tautz says. “It’s a little as if the road network, traffic laws, bodywork, and rules for operating the vehicles are constantly changing while driving autonomously.” 

To get past this, the researchers designed the software to learn a simple model of the environment in which the manipulation takes place, in parallel with the initial manipulation cycles. To speed up learning, the agent trains simultaneously in reality and in its own model.
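That combination – one costly attempt in reality, then many cheap rehearsals inside a learned model – resembles the classic “Dyna” pattern from the reinforcement learning literature. The runnable toy below is a schematic stand-in, not the Jülich group’s software; a ten-state world takes the place of the tip-and-molecule system:

import random

N_STATES, GOAL = 10, 9      # toy 1-D world standing in for the real task
ACTIONS = [-1, +1]

def real_step(s, a):
    # The "real" environment: slow and risky in the lab, trivial here.
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0)

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                  # learned environment model: (s, a) -> (s2, r)

def update(s, a, r, s2, alpha=0.1, gamma=0.95):
    best = max(q[(s2, b)] for b in ACTIONS)
    q[(s, a)] += alpha * (r + gamma * best - q[(s, a)])

for episode in range(200):
    s = 0
    while s != GOAL:
        greedy = max(ACTIONS, key=lambda b: q[(s, b)])
        a = random.choice(ACTIONS) if random.random() < 0.1 else greedy
        s2, r = real_step(s, a)     # one costly interaction with reality
        update(s, a, r, s2)
        model[(s, a)] = (s2, r)     # keep the environment model current
        for _ in range(10):         # many cheap rehearsals in the model
            ps, pa = random.choice(list(model))
            p2, pr = model[(ps, pa)]
            update(ps, pa, pr, p2)
        s = s2

The planning loop extracts many updates from each real interaction, which matters when every real attempt risks breaking the tip-molecule bond and invalidating past experience.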

“This is the first time ever that we have succeeded in bringing together artificial intelligence and nanotechnology,” Klaus-Robert Müller says. 

“Up until now, this has only been a ‘proof of principle,’” Tautz continues. “However, we are confident that our work will pave the way for the robot-assisted automated construction of functional supramolecular structures, such as molecular transistors, memory cells, or qubits – with a speed, precision, and reliability far in excess of what is currently possible.”

 


AI Model Might Let Game Developers Generate Lifelike Animations


A team of researchers at Electronic Arts has recently experimented with various artificial intelligence algorithms, including reinforcement learning models, to automate aspects of video game creation. The researchers hope these AI models can save developers and animators time on repetitive tasks like coding character movement.

Designing a video game, particularly one of the large, triple-A titles made by big studios, requires thousands of hours of work. As video game consoles, computers, and mobile devices become more powerful, the games themselves grow more and more complex. Game developers are searching for ways to produce more game content with less effort; for example, they often use procedural generation algorithms to produce landscapes and environments. Similarly, artificial intelligence algorithms can be used to generate video game levels, automate game testing, and even animate character movements.

Character animations for video games are often created with the assistance of motion capture systems, which track the movements of real actors to produce more lifelike animations. However, this approach has limitations. Not only does the code that drives the animations still need to be written, but animators are also limited to the actions that have actually been captured.

As Wired reported, researchers from EA set out to automate this process and save both time and money on these animations. The team demonstrated that a reinforcement learning algorithm could be used to create a human model that moves realistically, without the need to manually record and code the movements. The research team used “Motion Variational Autoencoders” (Motion VAEs) to identify relevant patterns of movement in motion-capture data. After the autoencoders extracted the movement patterns, a reinforcement learning system was trained on the data, with the goal of producing realistic animations that satisfy certain objectives (such as running after a ball in a soccer game). The planning and control algorithms used by the research team were able to generate the desired motions, even producing motions that weren’t in the original motion-capture data. This means that after learning how a subject walks, the reinforcement learning model can determine what running looks like.
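At the heart of a variational autoencoder is the idea of compressing each input into a small latent vector plus noise and then reconstructing it. Below is a stripped-down sketch of that core mechanic in PyTorch; the pose dimension, layer sizes, and loss weighting are arbitrary assumptions for illustration, not the architecture from the EA paper:

import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    """Toy VAE over fixed-size pose vectors (illustrative, not the paper's model)."""
    def __init__(self, pose_dim=63, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(pose_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, pose_dim))

    def forward(self, pose):
        h = self.encoder(pose)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample a latent while staying differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = self.decoder(z)
        # Reconstruction error plus a KL term keeps the latent space smooth,
        # which is what lets a downstream controller steer through it.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        loss = (recon - pose).pow(2).mean() + 0.01 * kl
        return recon, loss

vae = MotionVAE()
poses = torch.randn(32, 63)   # a batch of stand-in pose vectors
recon, loss = vae(poses)
loss.backward()

A controller trained with reinforcement learning would then pick points in that smooth latent space, and the decoder would turn them into plausible poses frame by frame.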

Julian Togelius, NYU professor and co-founder of the AI tools company Modl.ai, was quoted by Wired as saying that the technology could be quite useful in the future and is likely to change how game content is created.

“Procedural animation will be a huge thing. It basically automates a lot of the work that goes into building game content,” Togelius said to Wired.

According to professor Michiel van de Panne of UBC, who was involved with the reinforcement learning project, the research team is looking to take the concept further by animating non-human avatars with the same process. Van de Panne told Wired that although creating new animations this way can be quite difficult, he is confident the technology will one day render appealing animations.

Other applications of AI in video game development include the generation of basic games. For instance, researchers at the University of Toronto designed a generative adversarial network that could recreate the game Pac-Man without access to any of the code used to build it. Elsewhere, researchers from the University of Alberta used AI models to generate video game levels based on the rules of different games like Super Mario Bros. and Mega Man.
