
Machine-Learning Program Connects to Human Brain and Commands Robots

Researchers at Ecole Polytechnique Fédérale de Lausanne have developed a machine-learning program that can be connected to a human brain and used to command a robot. The program can alter the robot’s movements based on electrical signals from the brain. 

These new advancements could assist tetraplegic patients who are unable to speak or move. The work builds on a substantial body of earlier research into systems that help these patients complete tasks on their own. 

The study was published in Communications Biology.

Prof. Aude Billard is the head of EPFL’s Learning Algorithms and Systems Laboratory. 

“People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object,” Billard said. “Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”

Moving the Robot With Thoughts

Prof. Billard and José del R. Millán, together with their two research groups, developed the computer program, which requires no voice control or touch function. Patients can move the robot with their thoughts alone.

The researchers began by basing the system on a robotic arm that had been developed several years earlier. The arm can move from side to side, reposition objects in front of it, and maneuver around objects in its path. 

“In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” Prof. Billard says. 

The researchers then improved the robot’s mechanism for avoiding obstacles so that it would be more precise.

Carolina Gaspar Pinto Ramon Correia is a PhD student at Prof. Billard’s lab. 

“At first, the robot would choose a path that was too wide for some obstacles, taking it too far away, and not wide enough for others, keeping it too close,” says Correia. “Since the goal of our robot was to help paralyzed patients, we had to find a way for users to be able to communicate with it that didn’t require speaking or moving.”

Developing the Algorithm

To do this, they had to develop an algorithm that could adjust the robot’s movements based only on a patient’s thoughts. The algorithm receives its input from a headcap fitted with electrodes that record EEG readings of the patient’s brain activity.

To use the system, the patient only needs to watch the robot. When the robot makes an incorrect move, the patient’s brain emits a clearly identifiable “error message,” indicating that the robot has performed the wrong action. The robot does not initially understand why it is receiving the signal; instead, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and which actions the robot should take. 

Through this trial-and-error process, the robot tries out different movements to find the correct one, and usually only three to five attempts are needed to arrive at the right response.
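EPFL has not released the code behind this loop, but a minimal sketch can convey the idea. In the hypothetical Python below, a binary “error” signal (standing in for the decoded brain response) rejects candidate obstacle-clearance margins until one is accepted; the candidate values and the simulated user’s preferred range are invented purely for illustration.

```python
# Hypothetical sketch of the trial-and-error loop described above.
# A binary "error" signal -- standing in for the EEG-decoded response --
# prunes candidate clearance margins until the user stops flagging errors.

CANDIDATE_MARGINS_M = [0.05, 0.45, 0.25, 0.35, 0.15]  # clearance options (m)

def user_flags_error(margin_m: float) -> bool:
    """Stand-in for the decoded brain signal: this simulated user is
    satisfied only with roughly 0.20-0.30 m of clearance."""
    return not 0.20 <= margin_m <= 0.30

def learn_clearance() -> float:
    """Try margins until one is accepted; typically a few attempts."""
    for attempt, margin in enumerate(CANDIDATE_MARGINS_M, start=1):
        if user_flags_error(margin):
            print(f"attempt {attempt}: {margin:.2f} m -> error signal")
            continue
        print(f"attempt {attempt}: {margin:.2f} m -> accepted")
        return margin
    raise RuntimeError("no candidate margin was accepted")

learn_clearance()  # converges on the third attempt in this toy setup
```

The real system replaces this naive search with inverse reinforcement learning, which infers the user’s underlying preferences rather than testing candidates blindly.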

“The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior,” says Prof. Millán. “Developing the detection technology for error signals was one of the biggest technical challenges we faced.”
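Detecting an error signal of this kind is commonly framed as binary classification of short EEG windows time-locked to the robot’s action. The sketch below is not EPFL’s detector; it is a generic, hypothetical pipeline, with an assumed sampling rate, band limits, window length, and synthetic data, that band-pass filters each epoch and classifies it with linear discriminant analysis, a standard choice in EEG decoding.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 512  # assumed EEG sampling rate (Hz)

def bandpass(epoch: np.ndarray, lo: float = 1.0, hi: float = 10.0) -> np.ndarray:
    """Band-pass each channel; error potentials are low-frequency."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, epoch, axis=-1)

def features(epoch: np.ndarray) -> np.ndarray:
    """Filter, downsample, and flatten a (channels x samples) epoch."""
    return bandpass(epoch)[:, ::8].ravel()

# Synthetic placeholder data: 100 epochs, 16 channels, 0.6 s windows.
rng = np.random.default_rng(0)
X = np.stack([features(rng.standard_normal((16, int(0.6 * FS))))
              for _ in range(100)])
y = rng.integers(0, 2, size=100)  # 1 = user perceived an error

clf = LinearDiscriminantAnalysis().fit(X, y)
new_epoch = rng.standard_normal((16, int(0.6 * FS)))
print("error detected" if clf.predict([features(new_epoch)])[0] else "no error")
```

In practice the hard part, as Prof. Millán notes, is making this detection reliable enough to drive the robot’s learning in real time.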

Iason Batzianoulis is the study’s lead author.

“What was particularly difficult in our study was linking a patient’s brain activity to the robot’s control system, or in other words, ‘translating’ a patient’s brain signals into actions performed by the robot,” Batzianoulis says. “We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”
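In code, that final association step could be as simple as a lookup from decoded task labels to robot control routines. Everything in the sketch below, labels, commands, and routines alike, is invented for illustration; the actual mapping in the study is learned rather than hand-written.

```python
from typing import Callable

# Hypothetical glue layer: a decoded task label is looked up and
# dispatched to the matching robot control routine.

def replan_trajectory() -> str:
    return "replanning trajectory around the obstacle"

def keep_course() -> str:
    return "continuing on the current trajectory"

TASK_TO_CONTROL: dict[str, Callable[[], str]] = {
    "error_detected": replan_trajectory,
    "no_error": keep_course,
}

def dispatch(decoded_label: str) -> None:
    """Route a decoded label to its control routine, defaulting to no-op."""
    action = TASK_TO_CONTROL.get(decoded_label, keep_course)
    print(f"robot: {action()}")

dispatch("error_detected")  # -> robot: replanning trajectory around the obstacle
```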

The researchers believe that the algorithm could eventually be used to control wheelchairs. 

“For now there are still a lot of engineering hurdles to overcome,” says Prof. Billard. “And wheelchairs pose an entirely new set of challenges, since both the patient and the robot are in motion.”

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.