Scientists from the École Polytechnique Fédérale de Lausanne (EPFL) are working on new ways to improve the control of robotic hands for amputees. They have developed a method that combines individual finger control with automation to improve grasping and manipulation. They tested the approach, which merges neuroengineering and robotics, on three amputees and seven healthy subjects. The results of the study were published in Nature Machine Intelligence.
The newly developed technology combines two separate fields of robotic hand control, something that had not been done before, and belongs to the emerging field of shared control in neuroprosthetics.
One of the two concepts comes from neuroengineering: the intended finger movement is decoded from muscular activity on the amputee’s stump and used for individual finger control of the prosthetic hand. The other comes from robotics: the robotic hand automatically grips objects and maintains contact with them during grasping.
“When you hold an object in your hand, and it starts to slip, you only have a couple of milliseconds to react,” explains Aude Billard, who leads EPFL’s Learning Algorithms and Systems Laboratory. “The robotic hand has the ability to react within 400 milliseconds. Equipped with pressure sensors all along the fingers, it can react and stabilize the object before the brain can actually perceive that the object is slipping.”
The process starts with the algorithm learning to decipher the user’s intention and translate it into finger movement of the prosthetic hand. To make this possible, the amputee first trains the machine learning algorithm by performing a series of hand movements. Sensors placed on the amputee’s stump detect the muscular activity, and the algorithm learns which hand movements correspond to which patterns of muscular activity. Once it knows the user’s intended finger movements, it can control the individual fingers of the prosthetic hand.
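The study’s actual decoder is not reproduced in the article, but the training procedure described above matches a standard supervised-learning pipeline. Below is a minimal sketch, assuming windowed surface-EMG features and a scikit-learn classifier; all data shapes, labels, and names are illustrative, not from the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: each row holds features from one short
# window of surface EMG (e.g., mean absolute value per electrode)
# recorded while the user performs a prompted finger movement.
rng = np.random.default_rng(0)
X_train = rng.random((600, 8))        # 600 windows x 8 electrode channels
y_train = rng.integers(0, 5, 600)     # label: which finger was moved (0-4)

# Train a classifier that maps muscle activity to intended finger movement.
decoder = RandomForestClassifier(n_estimators=100, random_state=0)
decoder.fit(X_train, y_train)

def decode_intent(emg_window: np.ndarray) -> int:
    """Return the predicted finger index for one new EMG window."""
    return int(decoder.predict(emg_window.reshape(1, -1))[0])

print(decode_intent(rng.random(8)))   # e.g., 2 -> drive the middle finger
```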
Katie Zhuang is the first author of the publication. She spoke about the machine learning algorithm.
“Because muscle signals can be noisy, we need a machine learning algorithm that extracts meaningful activity from those muscles and interprets them into movements,” she said.
The scientists then engineered the algorithm so that robotic automation kicks in when the user tries to grasp an object. When an object comes into contact with sensors on the surface of the prosthetic hand, the algorithm tells the hand to close its fingers and grasp. The system adapts an earlier study in which robotic arms were designed to identify the shape of objects and grasp them based solely on tactile information, without relying on visual signals.
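The article describes this shared control only at a high level; the sketch below shows one way such a reflex loop could be organized. The `hand` and `decoder` APIs, thresholds, and timing are entirely hypothetical:

```python
import time

# Illustrative thresholds; the study's actual sensor processing is not given.
CONTACT_THRESHOLD = 0.2   # normalized pressure that counts as contact
SLIP_THRESHOLD = 0.05     # pressure drop per cycle treated as incipient slip

def shared_control_loop(hand, decoder):
    """Blend decoded user intent with automated grasping (hypothetical
    `hand` and `decoder` APIs). The decoder drives individual fingers
    until contact is sensed; automation then maintains the grip."""
    last_pressure = 0.0
    while True:
        pressure = hand.max_fingertip_pressure()       # hypothetical call
        if pressure > CONTACT_THRESHOLD:
            hand.close_fingers_until_stable()          # automation takes over
            if last_pressure - pressure > SLIP_THRESHOLD:
                hand.increase_grip_force()             # react to slip faster
                                                       # than the user could
        else:
            hand.set_finger(decoder.current_intent())  # follow user intent
        last_pressure = pressure
        time.sleep(0.01)  # ~100 Hz loop, well inside the 400 ms budget
```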
There are still challenges to overcome before this technology can be used effectively and become a commercially viable option for amputees seeking prosthetic hands. Still, it is a significant step forward for the field, and it will continue to push the merging of humans and robotics. For now, the algorithm is still being tested on a robot.
“Our shared approach to control robotic hands could be used in several neuroprosthetic applications such as bionic hand prostheses and brain-to-machine interfaces, increasing the clinical impact and usability of these devices,” says Silvestro Micera, EPFL’s Bertarelli Foundation Chair in Translational Neuroengineering, and Professor of Bioelectronics at Scuola Superiore Sant’Anna.
Study Suggests Robots Are More Persuasive When They Pretend To Be Human
Advances in artificial intelligence have created bots and machines that can potentially pass as humans when they interact with people exclusively through a digital medium. Recently, a team of computer science researchers studied how machines and humans interact when the humans believe a robot partner is human. As reported by ScienceDaily, the study found that people find bots more persuasive when they believe the bots are human.
Talal Rahwan, an associate professor of computer science at NYU Abu Dhabi, recently led a study examining how robots and humans interact with each other. The results of the experiment were published in Nature Machine Intelligence in a report called “Transparency-Efficiency Tradeoff in Human-Machine Cooperation.” During the study, test subjects were instructed to play a cooperative game with a partner, who might be either a human or a bot.
The game was a twist on the classic Prisoner’s Dilemma, in which participants must decide whether to cooperate with or betray their partner in every round. In a prisoner’s dilemma, one side may defect and betray its partner to gain a benefit at the other player’s expense; only by cooperating can both sides assure themselves of gain.
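To make the incentive structure concrete, here is a minimal sketch of one conventional payoff matrix for the game; the exact payoffs used in the study are not given in the article:

```python
# Conventional prisoner's dilemma payoffs (temptation > reward > punishment
# > sucker's payoff); the specific values used in the study are not reported.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: both gain
    ("cooperate", "defect"):    (0, 5),  # the betrayed player gets nothing
    ("defect",    "cooperate"): (5, 0),  # the defector profits at the other's cost
    ("defect",    "defect"):    (1, 1),  # mutual betrayal: both do poorly
}

def play_round(action_a: str, action_b: str) -> tuple[int, int]:
    """Return the (player A, player B) payoffs for one round."""
    return PAYOFFS[(action_a, action_b)]

print(play_round("defect", "cooperate"))     # (5, 0): betrayal pays once
print(play_round("cooperate", "cooperate"))  # (3, 3): cooperation pays jointly
```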
The researchers manipulated their test subjects by providing them with either correct or incorrect information about the identity of their partner. Some participants were told they were playing with a bot even though their partner was actually human; others were in the inverse situation. Over the course of the experiment, the research team was able to quantify whether people treated partners differently when told those partners were bots. The researchers tracked the degree of prejudice against the bots, and how those attitudes affected interactions with bots that identified themselves.
The results of the experiment demonstrated that bots were more effective at engendering cooperation from their partners when the human believed the bot was also a human. However, when it was revealed that the bot was a bot, cooperation levels dropped. Rahwan explained that while many scientists and ethicists agree that AI should be transparent about how decisions are made, it is less clear whether AI agents should also be transparent about their nature when communicating with others.
Last year, Google Duplex made a splash when a stage demo showed it was capable of making phone calls and booking appointments on behalf of its user, generating human-like speech so sophisticated that many people would have mistaken it for a real person had they not been told they were speaking to a bot. Since the debut of Google Duplex, many AI and robot ethicists have voiced concerns over the technology, prompting Google to say that the agent would identify itself as a bot in the future. Currently, Google Duplex is being used only in a very limited capacity; it will soon see use in New Zealand, but only to check the operating hours of businesses. Ethicists remain worried about the degree to which the technology could be misused.
Rahwan argues that the recent study demonstrates that we should consider what costs we are willing to pay in return for transparency:
“Is it ethical to develop such a system? Should we prohibit bots from passing as humans, and force them to be transparent about who they are? If the answer is ‘Yes’, then our findings highlight the need to set standards for the efficiency cost that we are willing to pay in return for such transparency.”
Flexible Robot “Grows” Like a Plant
Engineers from MIT have designed a robot that can extend a chain-like appendage, making it extremely flexible and able to configure itself in many different ways. At the same time, it is strong enough to support heavy loads or apply torque, making it capable of assembling parts in tight spaces. After completing its tasks, the robot can retract the appendage and extend it again at a different length and shape.
The newly developed robot could make a difference in settings like warehouses, where most robots cannot maneuver into narrow spaces. The plant-like robot could grab products at the back of a shelf, or even reach around a car’s engine parts to unscrew an oil cap.
The design was inspired by the way plants grow. In that process, nutrients are transported to the plant’s tip as a fluid; once there, they are converted into solid material that builds, a little at a time, a supportive stem.
The plant-like robot has a “growing point” or gearbox, which draws a loose chain of interlocking blocks into the box. Once there, gears lock the chain units together and release the chain, unit by unit, until it forms a rigid appendage.
Team of Engineers
The new robot was presented this week at the IEEE International Conference on Intelligent Robots and Systems (IROS) in Macau. In the future, the engineers would like to add grippers, cameras, and sensors that could be mounted onto the gearbox. That would allow the robot to tighten a loose screw after snaking its way through an aircraft’s propulsion system, or to retrieve a product without disturbing its surroundings.
Harry Asada is a professor of mechanical engineering at MIT.
“Think about changing the oil in your car,” Asada says. “After you open the engine roof, you have to be flexible enough to make sharp turns, left and right, to get to the oil filter, and then you have to be strong enough to twist the oil filter cap to remove it.”
Tongxi Yan is a former graduate student in Asada’s lab, and he led the work.
“Now we have a robot that can potentially accomplish such tasks,” he says. “It can grow, retract, and grow again to a different shape, to adapt to its environment.”
The team of engineers also consisted of MIT graduate student Emily Kamienski and visiting scholar Seiichi Teshigawara.
After defining the different aspects of plant growth, the team looked to implement them in a robot.
“The realization of the robot is totally different from a real plant, but it exhibits the same kind of functionality, at a certain abstract level,” Asada says.
The gearbox was designed to represent the robot’s “growing tip,” the equivalent of a plant’s bud, where nutrients flow up to the site and the tip builds a rigid stem. The box houses a system of gears and motors that pulls up a fluidized material, in this case a sequence of 3-D printed plastic units that interlock with one another.
The robot can be programmed to choose which units to lock together and which to leave unlocked, allowing it to form specific shapes and “grow” in specific directions.
“It can be locked in different places to be curved in different ways, and have a wide range of motions,” Yan says.
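The article does not say how these lock patterns are specified; the sketch below illustrates the general idea, with the unit representation, angles, and gearbox API all invented for illustration:

```python
# A growth plan: for each chain unit fed through the gearbox, decide
# whether to lock it to its neighbor and at what relative angle.
# Straight locks extend the appendage, angled locks create bends,
# and unlocked units stay flexible. All names here are hypothetical.
growth_plan = [
    {"locked": True,  "angle_deg": 0},   # rigid, straight segment
    {"locked": True,  "angle_deg": 0},
    {"locked": True,  "angle_deg": 30},  # bend to steer around an obstacle
    {"locked": True,  "angle_deg": 30},
    {"locked": False, "angle_deg": 0},   # left unlocked: remains compliant
]

def grow(gearbox, plan):
    """Feed chain units through the gearbox one at a time, locking each
    according to the plan so the appendage 'grows' into a target shape."""
    for unit in plan:
        gearbox.feed_unit()                        # hypothetical gearbox API
        if unit["locked"]:
            gearbox.lock_last_unit(unit["angle_deg"])
```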
The chain is able to support a one-pound weight when locked and rigid. If a gripper were attached, the researchers believe the robot could grow long enough to maneuver through a narrow space and perform tasks such as unscrewing a cap.
Researchers Develop Resilient RoboBee with Soft Muscles
Researchers at the Harvard Microrobotics Laboratory at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), along with the Wyss Institute for Biologically Inspired Engineering, have developed a RoboBee powered by soft artificial muscles. The microrobot can crash into walls, fall to the ground, and collide with other RoboBees without suffering damage. In a big moment for robotics, the RoboBee is the first microrobot powered by soft actuators to achieve controlled flight.
Yufeng Chen is first author of the paper and a former graduate student and postdoctoral fellow at SEAS.
“There has been a big push in the field of microrobotics to make mobile robots out of soft actuators because they are so resilient,” said Chen. “However, many people in the field have been skeptical that they could be used for flying robots because the power density of those actuators simply hasn’t been high enough and they are notoriously difficult to control. Our actuator has high enough power density and controllability to achieve hovering flight.”
The research was published in Nature.
One of the problems the researchers faced was power density. They turned to the electrically driven soft actuators developed in the lab of David Clarke, the Extended Tarr Family Professor of Materials. The soft actuators are made from dielectric elastomers, soft materials with strong insulating properties that deform when an electric field is applied.
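The article does not spell out the actuation physics, but a standard result from the dielectric-elastomer literature is that the effective compressive (Maxwell) stress scales with the square of the applied electric field:

$$p = \varepsilon_0 \varepsilon_r E^2 = \varepsilon_0 \varepsilon_r \left(\frac{V}{d}\right)^2$$

Here ε₀ is the vacuum permittivity, ε_r the elastomer’s relative permittivity, V the applied voltage, and d the layer thickness; this quadratic scaling is why thin elastomer layers driven at high voltage can produce useful strains.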
After improving the electrode conductivity, the team was able to operate the actuator at 500 hertz, on par with the rigid actuators used in previous robots.
Another issue with soft actuators is that the system often becomes unstable. To get past this, the researchers developed a lightweight airframe with a vertical constraining thread that prevents the actuator from buckling.
Within these small-scale robots, the soft actuators can be easily swapped out and assembled. To showcase various flight capabilities, the researchers built several models of the soft-powered RoboBee.
One model has two wings and can take off from the ground, but it has no further control. A four-wing, two-actuator model can fly in a cluttered environment, avoiding multiple collisions within a single flight.
Elizabeth Farrell Helbling is a former graduate student at SEAS, and she co-authored the paper.
“One advantage of small-scale, low-mass robots is their resilience to external impacts,” she said. “The soft actuator provides an additional benefit because it can absorb impact better than traditional actuation strategies. This would come in handy in potential applications such as flying through rubble for search and rescue missions.”
Another model is the eight-wing, four-actuator RoboBee, which can perform controlled hovering flight, the first time this has been demonstrated by a soft-powered flying microrobot.
The researchers are now looking to increase the efficiency of the soft-powered RoboBee. It still has a long way to go before catching up to traditional flying robots.
Robert Wood is the Charles River Professor of Engineering and Applied Sciences at SEAS. He is also a core faculty member of the Wyss Institute for Biologically Inspired Engineering and senior author of the paper.
“Soft actuators with muscle-like properties and electrical activation represent a grand challenge in robotics,” says Professor Wood. “If we could engineer high-performance artificial muscles, the sky is the limit for what robots we could build.”