

New Method Enables Humans to Help Robots “See” Their Environments



Image: Kavraki Lab

A team of engineers at Rice University has developed a new method that enables humans to help robots “see” their environments and complete various tasks. 

The new strategy, called Bayesian Learning IN the Dark (BLIND), is a novel solution to the problem of motion planning for robots operating in environments that sometimes contain blind spots. 

The study was led by computer scientists Lydia Kavraki and Vaibhav Unhelkar and co-led by Carlos Quintero-Peña and Constantinos Chamzas of Rice’s George R. Brown School of Engineering. It was presented at the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation.

Human in the Loop 

According to the study, the algorithm keeps a human in the loop to “augment robot perception and, importantly, prevent the execution of unsafe motion.”

The team combined Bayesian inverse reinforcement learning with established motion planning techniques to assist robots with a lot of moving parts. 

To test BLIND, the team tasked a robot with a seven-jointed articulated arm with grabbing a small cylinder from one table and moving it to another. To do so, however, the robot first had to move past a barrier. 

“If you have more joints, instructions to the robot are complicated,” Quintero-Peña said. “If you’re directing a human, you can just say, ‘Lift up your hand.'”

However, a robot requires programs that are specific about the movement of each joint at each point in its trajectory, and this becomes even more important when there are obstacles blocking its “view.” 
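To make the per-joint specificity concrete, here is a minimal illustrative sketch (not the team's code; the joint angles and waypoint names are invented for illustration) of how a trajectory for a seven-jointed arm must spell out every joint angle at every point along the path:

```python
# Illustrative sketch, not the BLIND implementation: a trajectory for a
# seven-joint arm must specify one angle per joint at every waypoint.
from typing import List

NUM_JOINTS = 7

def make_waypoint(angles: List[float]) -> List[float]:
    """Validate one trajectory point: exactly one angle (radians) per joint."""
    if len(angles) != NUM_JOINTS:
        raise ValueError(f"expected {NUM_JOINTS} joint angles, got {len(angles)}")
    return list(angles)

# A trajectory is an ordered list of such waypoints; even a short
# reach-over-a-barrier motion needs many of them.
trajectory = [
    make_waypoint([0.0, 0.3, 0.0, -1.2, 0.0, 1.5, 0.0]),  # start pose
    make_waypoint([0.2, 0.5, 0.1, -1.0, 0.0, 1.4, 0.1]),  # lift over the barrier
    make_waypoint([0.4, 0.2, 0.2, -0.8, 0.1, 1.2, 0.2]),  # reach toward the cylinder
]
print(len(trajectory), "waypoints ×", NUM_JOINTS, "joints")
```

This is why "lift up your hand" has no direct robot equivalent: every one of those numbers has to be planned, and an obstacle in a blind spot can invalidate all of them.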

Video: “Human-Guided Motion Planning in Partially Observable Environments”

Learning to “See” Around Obstacles

BLIND doesn’t program a trajectory up front. Instead, it inserts a human mid-process to refine the choreographed options suggested by the robot’s algorithm. 

“BLIND allows us to take information in the human’s head and compute our trajectories in this high-degree-of-freedom space,” Quintero-Peña said. “We use a specific way of feedback called critique, basically a binary form of feedback where the human is given labels on pieces of the trajectory.”

The labels appear as connected green dots, representing possible paths. As BLIND goes from dot to dot, the human approves or rejects each movement, refining the path and avoiding obstacles. 
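The critique loop described above can be sketched in a few lines. This is a hypothetical toy version, not the authors' algorithm: the planner and the human are both mocked, and the segment names are invented. The point is only the shape of the interaction, where binary approve/reject labels on pieces of a trajectory filter out unsafe segments before the robot commits to a path:

```python
# Hypothetical sketch of critique-style binary feedback, not the BLIND code.
# A planner proposes path segments; a human approves or rejects each one;
# rejected segments are discarded so the planner can route around them.

def propose_segments(n):
    """Stand-in for the motion planner: n candidate segments, here just IDs."""
    return [f"segment-{i}" for i in range(n)]

def human_critique(segment):
    """Stand-in for the human: True = approve, False = reject.
    In BLIND the human would click on the displayed green dots;
    here we pretend segment-2 crosses a blind spot."""
    return segment != "segment-2"

approved = []
for seg in propose_segments(5):
    if human_critique(seg):
        approved.append(seg)  # keep this piece of the path
    # rejected segments are dropped; replanning avoids them

print(approved)  # segment-2 is filtered out
```

In the real system the accepted and rejected labels feed a Bayesian inverse reinforcement learning model rather than a simple filter, but the human-facing interface is exactly this binary.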

“It’s an easy interface for people to use, because we can say, ‘I like this’ or ‘I don’t like that,’ and the robot uses this information to plan,” Chamzas said. Once its movements have been approved in this way, the robot can carry out its task. 

“One of the most important things here is that human preferences are hard to describe with a mathematical formula,” Quintero-Peña said. “Our work simplifies human-robot relationships by incorporating human preferences. That’s how I think applications will get the most benefit from this work.”

Kavraki has worked with advanced programming for NASA’s humanoid Robonaut aboard the International Space Station. 

“This work wonderfully exemplifies how a little, but targeted, human intervention can significantly enhance the capabilities of robots to execute complex tasks in environments where some parts are completely unknown to the robot but known to the human,” said Kavraki. 

“It shows how methods for human-robot interaction, the topic of research of my colleague Professor Unhelkar, and automated planning pioneered for years at my laboratory can blend to deliver reliable solutions that also respect human preferences.”

Alex McFarland is a Brazil-based writer who covers the latest developments in artificial intelligence & blockchain. He has worked with top AI companies and publications across the globe.