New Research Sheds Light on Human-Robot Trust

New research, led by the U.S. Army Research Laboratory along with the University of Central Florida Institute for Simulation and Training, is shedding light on the level of trust humans place in robots. The project focused on the relationship between humans and robots, and on whether humans give more weight to a robot’s reasoning or to its mistakes.

The research examined human-agent teaming, or HAT, and how human trust, workload, and perceptions of an agent are influenced by the transparency of agents such as robots, unmanned vehicles, and software agents. Agent transparency refers to a human’s ability to identify an agent’s intent, reasoning process, and future plans.

The research suggests that human confidence in a robot decreases whenever the robot makes a mistake, regardless of whether the robot has been transparent about its reasoning process.

The research was published in the August edition of IEEE Transactions on Human-Machine Systems, in a paper titled “Agent Transparency and Reliability in Human-Robot Interaction: The Influence on User Confidence and Perceived Reliability.”

Traditional research on human-agent teaming uses completely reliable intelligent agents that make no mistakes. This new study was one of the few to explore how agent transparency interacts with agent reliability: the robot made mistakes while humans were watching, and the humans were then asked whether they viewed the robot as less reliable. Throughout the process, the humans were given insight into the robot’s reasoning.

Dr. Julia Wright, the principal investigator for the project, is a researcher at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory, or ARL.

“Understanding how the robot’s behavior influences their human teammates is crucial to the development of effective human-robot teams, as well as the design of interfaces and communication methods between team members,” she said. “This research contributes to the Army’s Multi-Domain Operations efforts to ensure overmatch in artificial intelligence-enabled capabilities. But it is also interdisciplinary, as its findings will inform the work of psychologists, roboticists, engineers, and system designers who are working toward facilitating better understanding between humans and autonomous agents in the effort to make autonomous teammates rather than simply tools.”

The project was part of a larger effort known as the Autonomous Squad Member (ASM) project, sponsored by the Office of the Secretary of Defense’s Autonomy Research Pilot Initiative. The ASM is a small ground robot that operates within an infantry squad and is able to communicate and interact with it.

In the study, participants observed human-agent soldier teams, which included the ASM, moving through a simulated training course. The observers’ task was to monitor the team and evaluate the robot. Throughout the course, the team encountered various events and obstacles. The soldiers navigated each one correctly, but at times the robot misunderstood an obstacle and made mistakes. The robot sometimes shared its reasoning behind certain actions as well as the expected outcome.

The study found that participants were more concerned with the robot’s mistakes than with the underlying logic and reasoning behind them. The robot’s reliability played a major role in the participants’ trust and perceptions: whenever the robot made a mistake, the observers rated its reliability lower.

Perceived reliability and trust increased when agent transparency was increased, that is, when the robot shared the details and reasoning behind its decisions. However, they remained lower than for robots that never made an error. This suggests that sharing reasoning and underlying logic could ease some of the trust and reliability issues surrounding robots.

“Earlier studies suggest that context matters in determining the usefulness of transparency information,” Wright said. “We need to better understand which tasks require more in-depth understanding of the agent’s reasoning, and how to discern what that depth would entail. Future research should explore ways to deliver transparency information based on the tasking requirements.”

This research will play a critical role in the field as interaction between humans and robots increases, and one of the most important areas will be the military. As these exercises show, robots and soldiers will eventually work side by side. Just as a soldier has to trust another soldier, the same will apply to robots. If that trust can be achieved and robots become commonplace in infantry squads, it will be another instance of artificial intelligence penetrating the defense industry.

 

Industrial Robotics Company ABB Joins Up With AI Startup Covariant

The AI startup Covariant and the industrial robotics company ABB will be partnering to engineer sophisticated robots that can pick up and manipulate a wide variety of objects. These robots will be used in warehouses and other industrial settings.

As Fortune reported, the industrial robotics company ABB primarily builds robots for car manufacturers, but it wants to branch out to other sectors. ABB is aiming to become involved in logistics, where its robots would be used in large warehouses, such as those run by Amazon, to manipulate items, package goods, and make shipments.

According to ABB president Sami Atiya, as quoted by Fortune, ABB sought partners experienced in creating sophisticated computer vision applications. While the company currently uses computer vision algorithms to operate some of its robots, ABB aimed to push the envelope and create reliable, high-dexterity robots capable of maneuvering and manipulating thousands of different objects.

The company examined many different companies before settling on Covariant as its partner. Covariant is a robotics research company whose researchers come from places like OpenAI and the University of California, Berkeley. Covariant produced the only software examined by ABB that could reliably recognize many different items without the intervention of human operators.

The computer vision and robotics applications developed by Covariant were trained with reinforcement learning. Thanks to deep neural networks and reinforcement learning, Covariant was able to create software that learns through experience and can reliably and consistently recognize objects once a pattern has been learned. The CEO of Covariant, Peter Chen, was interviewed by Fortune. Chen explained that as more robotics companies like ABB branch out into new industries and markets, the goal becomes the creation of robots capable of a wider variety of tasks than those currently used in many manufacturing and logistics operations. Most of the robots employed in industrial capacities are only capable of doing a handful of very specific things, and the goal, Chen said, is to create robots capable of adaptation.
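Covariant has not published the details of its system, but the idea of learning through experience can be illustrated with a toy reinforcement-learning loop. In the sketch below, an agent tries different grasp candidates, receives a success or failure reward, and gradually shifts toward the grasps that work. The grasp names, success rates, and update rule are illustrative assumptions, not Covariant’s method.

```python
import random

GRASPS = ["top-down", "side", "suction"]
# True success rates are hidden from the agent; it must discover them by trying.
TRUE_SUCCESS_RATE = {"top-down": 0.3, "side": 0.5, "suction": 0.8}

value = {g: 0.0 for g in GRASPS}   # estimated success rate per grasp
counts = {g: 0 for g in GRASPS}

def choose_grasp(epsilon=0.1):
    """Mostly exploit the best-known grasp, occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(GRASPS)
    return max(GRASPS, key=lambda g: value[g])

for _ in range(5000):
    grasp = choose_grasp()
    reward = 1.0 if random.random() < TRUE_SUCCESS_RATE[grasp] else 0.0
    counts[grasp] += 1
    value[grasp] += (reward - value[grasp]) / counts[grasp]  # running average of rewards

print("best grasp learned:", max(GRASPS, key=lambda g: value[g]))
```

In a real system the lookup table would be replaced by a deep network over camera images, but the feedback loop of trying, scoring, and updating is the same.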

As a result of the partnership with Covariant, ABB will gain insight into the technology that drives Covariant’s AI, and this knowledge could help it better integrate AI into the technology that powers its existing robots. Currently, Covariant is a fairly small operation, with only a handful of robots in full-time operation spread across industries such as electronics, pharmaceuticals, and apparel. However, its collaboration with ABB could lead to substantial growth.

The partnership between Covariant and ABB highlights the increasing role of AI startups in the robotics field. Other examples of AI startups collaborating with robotics companies include the Japanese corporation IHI establishing a partnership with the AI startup Osaro. That collaboration also concerned the use of robots to grasp and manipulate objects.

While there is currently a lot of focus on robots automating away human jobs, in some industries there simply aren’t enough humans to do those jobs to begin with. A recent report on the logistics sector estimates that over half of all logistics companies will face staff shortages over the next five years, with warehouse workers in particularly short supply. The report attributes the labor shortage to falling unemployment rates, long hours, tedious work, and low wages.

Researchers Bring Sense of Touch to Robotic Finger

Researchers at Columbia Engineering have given a newly developed robotic finger a sense of touch. The finger can localize touch with extremely high precision over large, multicurved surfaces, bringing robotics one step closer to human-like capability.

Matei Ciocarlie is an associate professor in the departments of mechanical engineering and computer science. Ciocarlie led the research in collaboration with Electrical Engineering Professor Ioannis (John) Kymissis. 

“There has long been a gap between stand-alone tactile sensors and fully integrated tactile fingers — tactile sensing is still far from ubiquitous in robotic manipulation,” says Ciocarlie. “In this paper, we have demonstrated a multicurved robotic finger with accurate touch localization and normal force detection over complex 3D surfaces.”

Current methods for integrating touch sensors into robot fingers face many challenges: multicurved surfaces are difficult to cover, wire counts are high, and the sensors are hard to fit into small fingertips, which prevents their use in dexterous hands. The Columbia Engineering team got around these challenges with a new approach: overlapping signals from light emitters and receivers embedded in a transparent waveguide layer that covers the functional areas of the finger.

By measuring light transport between every emitter and receiver, the team obtained a signal data set that changes in response to deformation of the finger under touch. Useful information, such as contact location and applied normal force, was then extracted from the data using data-driven deep learning methods, without the need for analytical models.

Through this method, the research team developed a fully integrated, sensorized robot finger that has a low wire count. It was built through the use of accessible manufacturing methods and can be easily integrated into dexterous hands. 

The study was published online in IEEE/ASME Transactions on Mechatronics.

The first part of the project was the use of light to sense touch. There is a layer of transparent silicone underneath the “skin” of the finger, and the team shined light into it from more than 30 LEDs. The finger also has over 30 photodiodes that are responsible for measuring how the light bounces around. As soon as the finger comes into contact with something, the skin deforms and light moves around in the transparent layer underneath the skin. The researchers then measure how much light goes from every LED to every diode in order to come up with about 1,000 signals. Each one of those signals contains information about the contact made.
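As a rough illustration of that sensing scheme, the sketch below lights each emitter in turn and records every photodiode, flattening the readings into a single signal vector of roughly 1,000 values per touch. The counts, scan order, and readout function are assumptions for illustration, not the team’s actual hardware interface.

```python
import numpy as np

NUM_LEDS = 32          # "more than 30 LEDs" shining into the transparent layer
NUM_PHOTODIODES = 32   # "over 30 photodiodes" measuring the bounced light

def read_photodiodes(active_led: int) -> np.ndarray:
    """Placeholder for the hardware readout: one intensity per photodiode
    while only `active_led` is lit. Replace with the real sensor interface."""
    return np.random.rand(NUM_PHOTODIODES)  # stand-in values for the sketch

def capture_signal_vector() -> np.ndarray:
    """Light each LED in turn and record every photodiode, giving
    NUM_LEDS * NUM_PHOTODIODES (about 1,000) signals per touch frame."""
    frame = np.zeros((NUM_LEDS, NUM_PHOTODIODES))
    for led in range(NUM_LEDS):
        frame[led] = read_photodiodes(led)
    return frame.ravel()  # flatten to a ~1,024-dimensional signal vector
```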

“The human finger provides incredibly rich contact information — more than 400 tiny touch sensors in every square centimeter of skin!” says Ciocarlie. “That was the model that pushed us to try and get as much data as possible from our finger. It was critical to be sure all contacts on all sides of the finger were covered — we essentially built a tactile robot finger with no blind spots.”

The second part of the project was designing the system so that the data could be processed by machine learning algorithms. The data is extremely complex and cannot be interpreted by humans, but current machine learning techniques can learn to extract specific information, such as where the finger is being touched, what is touching the finger, and how much force is being applied.

“Our results show that a deep neural network can extract this information with very high accuracy,” says Kymissis. “Our device is truly a tactile finger designed from the very beginning to be used in conjunction with AI algorithms.”
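A minimal sketch of that kind of data-driven mapping is shown below, assuming PyTorch: a small fully connected network takes the roughly 1,000 light-transport signals and outputs a contact location and a normal-force estimate. The layer sizes, output encoding, and training details are assumptions for illustration, not the published model.

```python
import torch
import torch.nn as nn

class TactileNet(nn.Module):
    """Maps a flattened light-transport signal vector to contact information."""
    def __init__(self, num_signals: int = 1024):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(num_signals, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.contact_head = nn.Linear(128, 3)  # contact location on the finger surface
        self.force_head = nn.Linear(128, 1)    # applied normal force

    def forward(self, signals: torch.Tensor):
        features = self.backbone(signals)
        return self.contact_head(features), self.force_head(features)

# Usage: predict location and force for a batch of signal vectors.
model = TactileNet()
location, force = model(torch.randn(8, 1024))
```

Trained on labeled touches, a network of this kind stands in for the analytical contact model the team says it did not need.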

The team also designed the finger so that it can be used on robotic hands. The finger is able to collect nearly 1,000 signals, but it only requires one 14-wire cable connecting it to the hand. There are also no complex off-board electronics required for it to function. 

The team currently has two dexterous hands that are being integrated with the fingers, and they will look to use the hands to demonstrate dexterous manipulation abilities.

“Dexterous robotic manipulation is needed now in fields such as manufacturing and logistics, and is one of the technologies that, in the longer term, are needed to enable personal robotic assistance in other areas, such as healthcare or service domains,” says Ciocarlie.

Swarm Robots Help Self-Driving Cars Avoid Collisions

The top priority for companies developing self-driving vehicles is ensuring that those vehicles can navigate safely without crashing or causing traffic jams. Northwestern University researchers have brought that goal one step closer with the development of the first decentralized algorithm with a collision-free, deadlock-free guarantee.

The researchers tested the algorithm in a simulation of 1,024 robots, as well as on a swarm of 100 real robots in the lab. In both cases, the robots reliably, safely, and efficiently converged to form a predetermined shape in less than a minute.

Northwestern’s Michael Rubenstein led the study. He is the Lisa Wissner-Slivka and Benjamin Slivka Professor in Computer Science in Northwestern’s McCormick School of Engineering. 

“If you have many autonomous vehicles on the road, you don’t want them to collide with one another or get stuck in a deadlock,” said Rubenstein. “By understanding how to control our swarm robots to form shapes, we can understand how to control fleets of autonomous vehicles as they interact with each other.”

The paper is set to be published in the journal IEEE Transactions on Robotics later this month. 

One advantage of a swarm of small robots over a single large robot, or a swarm led by one robot, is the lack of centralized control. Centralized control can become a major point of failure, and Rubenstein’s decentralized algorithm acts as a fail-safe.

“If the system is centralized and a robot stops working, then the entire system fails,” Rubenstein said. “In a decentralized system, there is no leader telling all the other robots what to do. Each robot makes its own decisions. If one robot fails in a swarm, the swarm can still accomplish the task.”

In order to avoid collisions and jams, the robots coordinate with each other. The ground beneath the robots acts as a grid for the algorithm, and each robot is aware of its position on the grid due to technology similar to GPS. 

Before moving from one spot to another, each robot uses sensors to communicate with the others and determine whether other spaces on the grid are vacant or occupied.

“The robots refuse to move to a spot until that spot is free and until they know that no other robots are moving to that same spot,” Rubenstein said. “They are careful and reserve a space ahead of time.”

The robots are able to communicate with each other to form a shape, and this is possible because each robot is near-sighted.

“Each robot can only sense three or four of its closest neighbors,” Rubenstein explained. “They can’t see across the whole swarm, which makes it easier to scale the system. The robots interact locally to make decisions without global information.”
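A minimal sketch of that decision rule is shown below: each robot considers only its next cell, refuses to move until that cell is both vacant and unreserved, and claims the cell before moving. The grid representation, goal-seeking step, and reservation bookkeeping are simplified assumptions, not the published algorithm.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    rid: int
    pos: tuple    # current grid cell (x, y)
    goal: tuple   # target grid cell (x, y)

def next_cell_toward_goal(pos, goal):
    """Pick the neighboring cell one step closer to the goal (illustrative heuristic)."""
    x, y = pos
    gx, gy = goal
    if x != gx:
        return (x + (1 if gx > x else -1), y)
    if y != gy:
        return (x, y + (1 if gy > y else -1))
    return pos

def step(robot, occupied, reserved):
    """One decision cycle for a single robot; no central controller is involved."""
    target = next_cell_toward_goal(robot.pos, robot.goal)
    if target == robot.pos:
        return  # already at the goal
    # Refuse to move until the spot is free and no other robot has claimed it.
    if target in occupied or target in reserved:
        return
    reserved.add(target)         # reserve the space ahead of time
    occupied.discard(robot.pos)  # vacate the old cell
    robot.pos = target           # on hardware, the move itself takes time
    occupied.add(target)
    reserved.discard(target)     # release the claim once the move completes

# Usage: run robots one decision cycle at a time over a shared grid.
robots = [Robot(0, (0, 0), (2, 2)), Robot(1, (3, 0), (0, 0))]
occupied = {r.pos for r in robots}
reserved = set()
for _ in range(10):
    for r in robots:
        step(r, occupied, reserved)
```

Because a robot never enters a cell that is occupied or claimed, two robots cannot end up in the same cell, which is the collision-free property the prose describes.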

With the new algorithm, 100 robots can coordinate to form a shape within a minute, compared to the hour it took with some previous approaches. Rubenstein hopes his algorithm will be used in both driverless vehicles and automated warehouses.

“Large companies have warehouses with hundreds of robots doing tasks similar to what our robots do in the lab,” he said. “They need to make sure their robots don’t collide but do move as quickly as possible to reach the spot where they eventually give an object to a human.”

 
