
Researchers Develop Autonomous Systems Capable of Sensing Changes in Shadows


Engineers at MIT have developed a new system that could improve the safety of autonomous vehicles. By sensing small changes in shadows on the ground, the system can determine whether a moving object is approaching from around a corner.

One of the major goals for any company seeking to create autonomous vehicles is increased safety. Engineers are constantly working on making the vehicles better at avoiding collisions with other cars or pedestrians, especially those that are coming around a building’s corner. 

The new system could also be used on future robots that navigate hospitals. These robots could deliver medication or supplies throughout the building, and the system would help them avoid hitting people.

A paper describing the researchers’ successful experiments, including an autonomous car maneuvering around a parking garage and stopping as it approached another vehicle, will be presented next week at the International Conference on Intelligent Robots and Systems (IROS).

In those experiments, the system beat traditional LiDAR, which can only detect visible objects, by more than half a second. According to the researchers, fractions of a second can make a huge difference for fast-moving autonomous vehicles.

“For applications where robots are moving around environments with other moving objects or people, our method can give the robot an early warning that somebody is coming around the corner, so the vehicle can slow down, adapt its path, and prepare in advance to avoid a collision,” adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “The big dream is to provide ‘X-ray vision’ of sorts to vehicles moving fast on the streets.”

So far, the new system has only been tested indoors, where robotic speeds are lower and lighting conditions are more consistent, making it easier for the system to sense and analyze shadows.

Joining Rus on the paper are first author Felix Naser, a former CSAIL researcher; Alexander Amini, a CSAIL graduate student; Igor Gilitschenski, a CSAIL postdoc; recent graduate Christina Liao; Guy Rosman of the Toyota Research Institute; and Sertac Karaman, associate professor of aeronautics and astronautics at MIT.

ShadowCam System

Prior to the new developments, the researchers already had a system called “ShadowCam,” which uses computer-vision techniques to detect and classify changes in shadows on the ground. Earlier versions of the system, developed by MIT professors William Freeman and Antonio Torralba (who are not co-authors on the IROS paper), were presented in 2017 and 2018.

ShadowCam uses sequences of video frames from a camera targeting a specific area and detects changes in light intensity over time, from image to image, which may indicate that something is moving away or getting closer. It then classifies each image as containing a stationary object or a dynamic, moving one, so the robot can respond accordingly.
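
The core of that idea, flagging a patch of pixels as dynamic when its average brightness keeps changing from frame to frame, can be sketched in a few lines of Python. This is a minimal illustration, not the paper’s pipeline; the patch size and threshold are assumed values:

```python
import numpy as np

def classify_patch(frames, y, x, size=32, threshold=2.0):
    """Label a pixel patch 'dynamic' or 'static' from mean intensity
    changes across consecutive grayscale frames (2D numpy arrays)
    that have already been registered to a common viewpoint.
    The patch size and threshold are illustrative assumptions."""
    patch_means = [f[y:y + size, x:x + size].mean() for f in frames]
    # Frame-to-frame changes in the patch's average brightness.
    deltas = np.abs(np.diff(patch_means))
    # A growing or shrinking shadow shows up as a sustained change.
    return "dynamic" if deltas.mean() > threshold else "static"
```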

ShadowCam was then tweaked for use on autonomous vehicles. Originally, it relied on augmented reality labels called “AprilTags,” which resemble QR codes; ShadowCam used these to focus on specific clusters of pixels and check them for shadows. Relying on tags placed throughout an environment, however, proved impractical in real-world scenarios.

Because of this, the researchers developed a new process that combines image registration with a visual-odometry technique. Image registration overlays multiple images in order to reveal variations between them.

The visual-odometry technique the researchers use is called “Direct Sparse Odometry” (DSO), and it plays a similar role to the AprilTags. DSO plots the features of an environment on a 3D point cloud, and a computer-vision pipeline then selects a region of interest, such as the floor near a corner.

ShadowCam then uses DSO image registration to overlay all of the images from the same viewpoint of the robot. Whether moving or stationary, the robot can then zero in on the exact same patch of pixels where a shadow is located.
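
As a rough illustration of the registration step, OpenCV’s ECC alignment can warp one grayscale frame onto another before differencing, so that remaining intensity changes reflect the scene rather than camera motion. This is a generic sketch standing in for the DSO-based registration the authors actually built:

```python
import cv2
import numpy as np

def register_and_diff(ref, frame):
    """Align `frame` to `ref` (single-channel grayscale images) with
    an affine ECC warp, then return the absolute difference image.
    Generic stand-in for the paper's DSO-based registration."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    _, warp = cv2.findTransformECC(ref, frame, warp,
                                   cv2.MOTION_AFFINE, criteria)
    aligned = cv2.warpAffine(frame, warp, (ref.shape[1], ref.shape[0]),
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    # After alignment, intensity differences are dominated by real
    # changes in the scene (such as a creeping shadow) rather than
    # by the robot's own motion.
    return cv2.absdiff(ref, aligned)
```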

What’s Next

The researchers will continue to work on this system, and they will focus on the differences between indoor and outdoor lighting conditions. Ultimately, the team wants to increase the speed of the system as well as automate the process. 

 



Study Suggests Robots Are More Persuasive When They Pretend To Be Human


Advances in artificial intelligence have created bots and machines that can potentially pass as humans when they interact with people exclusively through a digital medium. Recently, a team of computer science researchers studied how machines and humans interact when the humans believe their robot counterparts are also human. As reported by ScienceDaily, the study found that people consider robots and chatbots more persuasive when they believe the bots are human.

Talal Rahwan, associate professor of computer science at NYU Abu Dhabi, recently led a study examining how robots and humans interact with each other. The results of the experiment were published in Nature Machine Intelligence in a report called “Transparency-Efficiency Tradeoff in Human-Machine Cooperation.” During the study, test subjects were instructed to play a cooperative game with a partner, who could be either a human or a bot.

The game was a twist on the classic Prisoner’s Dilemma, where participants must decide whether to cooperate with or betray their partner in every round. In a prisoner’s dilemma, one side may choose to defect and betray their partner to gain a benefit at a cost to the other player, while only mutual cooperation assures both sides of a gain.
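
The incentive structure is easiest to see in a payoff matrix. The sketch below uses the classic textbook payoff values, which are an assumption for illustration rather than the stakes used in the study:

```python
# Payoffs as (row player, column player); the numbers are the classic
# textbook values, assumed for illustration, not taken from the study.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: both gain
    ("cooperate", "defect"):    (0, 5),  # betrayed player loses out
    ("defect",    "cooperate"): (5, 0),  # defector gains at the other's cost
    ("defect",    "defect"):    (1, 1),  # mutual betrayal: both do poorly
}

def play_round(a, b):
    """Return the payoff pair for one round of the dilemma."""
    return PAYOFFS[(a, b)]

print(play_round("cooperate", "defect"))  # (0, 5)
```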

The researchers manipulated their test subjects by giving them either correct or incorrect information about the identity of their partner. Some participants were told they were playing with a bot even though their partner was actually human; others were in the inverse situation. Over the course of the experiment, the team was able to quantify whether people treated partners differently when told those partners were bots, tracking the degree of prejudice against bots and how such attitudes affected interactions with bots that identified themselves.
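
A sketch of the kind of comparison this involves: given per-round records of what each participant was told and whether they cooperated, one can tabulate cooperation rates by condition. The records and field layout below are invented for illustration:

```python
from collections import defaultdict

# Invented example records: (what the participant was told,
# what the partner actually was, whether the participant cooperated).
rounds = [
    ("human", "bot",   True),
    ("human", "bot",   True),
    ("bot",   "bot",   False),
    ("bot",   "human", True),
    ("human", "human", True),
    ("bot",   "bot",   False),
]

def cooperation_rates(rounds):
    """Cooperation rate grouped by what participants were told."""
    tally = defaultdict(lambda: [0, 0])  # told -> [cooperations, total]
    for told, _actual, cooperated in rounds:
        tally[told][0] += int(cooperated)
        tally[told][1] += 1
    return {told: c / n for told, (c, n) in tally.items()}

print(cooperation_rates(rounds))
# e.g. {'human': 1.0, 'bot': 0.33...} -- consistent with the finding
# that people cooperate more when they believe the partner is human.
```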

The results of the experiment demonstrated that bots were more effective at engendering cooperation from their partners when the humans believed the bots were also human. However, when a bot was revealed to be a bot, cooperation levels dropped. Rahwan explained that while many scientists and ethicists agree that AI should be transparent regarding how it makes decisions, it is less clear whether bots should also be transparent about their nature when communicating with others.

Last year, Google Duplex made a splash when a stage demo showed that it was capable of making phone calls and booking appointments on behalf of its user, generating human-like speech so sophisticated that many people would have mistaken it for a real person had they not been told they were speaking to a bot. Since the debut of Google Duplex, many AI and robot ethicists have voiced concerns over the technology, prompting Google to say that it would have the agent identify itself as a bot in the future. Currently, Google Duplex is used in only a very limited capacity; it will soon see use in New Zealand, but only to check businesses’ operating hours. Ethicists remain worried about the degree to which the technology could be misused.

Rahwan argues that the recent study demonstrates that we should consider what costs we are willing to pay in return for transparency:

“Is it ethical to develop such a system? Should we prohibit bots from passing as humans, and force them to be transparent about who they are? If the answer is ‘Yes’, then our findings highlight the need to set standards for the efficiency cost that we are willing to pay in return for such transparency.”



Flexible Robot “Grows” Like a Plant


Engineers from MIT have designed a robot that can extend a chain-like appendage, making it flexible enough to configure itself in many different ways yet strong enough to support heavy loads or apply torque, allowing it to assemble parts in tight spaces. After completing its tasks, the robot can retract the appendage and extend it again at a different length and in a different shape.

The newly developed robot could make a difference in settings like warehouses, where most robots cannot reach into narrow spaces. The new plant-like robot could be used to grab products at the back of a shelf, or even to maneuver around a car’s engine parts to unscrew an oil cap.

The design was inspired by the way plants grow. In that process, nutrients are transported to the plant’s tip as a fluid and, once there, are converted into solid material that builds a supportive stem, a little at a time.

The plant-like robot has a “growing point” or gearbox, which draws a loose chain of interlocking blocks into the box. Once there, gears lock the chain units together and release the chain, unit by unit, until it forms a rigid appendage. 

Team of Engineers

The new robot was presented this week at the IEEE International Conference on Intelligent Robots and Systems (IROS) in Macau. In the future, the engineers would like to mount grippers, cameras, and sensors onto the gearbox, which would allow the robot to tighten a loose screw after making its way through an aircraft’s propulsion system, or to retrieve a product without disturbing its surroundings.

Harry Asada is a professor of mechanical engineering at MIT.

“Think about changing the oil in your car,” Asada says. “After you open the engine roof, you have to be flexible enough to make sharp turns, left and right, to get to the oil filter, and then you have to be strong enough to twist the oil filter cap to remove it.”

Tongxi Yan is a former graduate student in Asada’s lab, and he led the work.

“Now we have a robot that can potentially accomplish such tasks,” he says. “It can grow, retract, and grow again to a different shape, to adapt to its environment.”

The team of engineers also consisted of MIT graduate student Emily Kamienski and visiting scholar Seiichi Teshigawara.

Plant-Like Robot

After defining the key aspects of plant growth, the team looked to implement them in a robot.

“The realization of the robot is totally different from a real plant, but it exhibits the same kind of functionality, at a certain abstract level,” Asada says.

The gearbox was designed to represent the robot’s “growing tip,” the equivalent of a plant’s bud, where nutrients flow up and build a rigid stem. The box houses a system of gears and motors that pull up a fluidized material, in this case a sequence of 3D-printed plastic units interlocked with one another.

The robot can be programmed to choose which units to lock together and which to leave unlocked, allowing it to form specific shapes and “grow” in specific directions.
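
As a loose analogy for how lock/unlock decisions shape the appendage, the chain can be modeled as a sequence of unit segments in which each locked unit contributes a fixed bend and each unlocked unit stays straight. The angles and unit length in this toy 2D sketch are invented for illustration, not the robot’s actual kinematics:

```python
import math

def grow_chain(lock_angles, unit_len=1.0):
    """Compute 2D positions of chain-unit joints.

    lock_angles: one entry per unit; a locked unit contributes a
    fixed bend (radians), an unlocked unit contributes 0.
    Purely illustrative -- not the robot's actual kinematics."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for bend in lock_angles:
        heading += bend
        x += unit_len * math.cos(heading)
        y += unit_len * math.sin(heading)
        points.append((x, y))
    return points

# "Grow" ten units with one bend locked in at the fifth unit,
# producing an L-shaped appendage.
shape = grow_chain([0, 0, 0, 0, math.pi / 2, 0, 0, 0, 0, 0])
print(shape[-1])  # tip at roughly (4.0, 6.0): four units east, six north
```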

“It can be locked in different places to be curved in different ways, and have a wide range of motions,” Yan says.

The chain can support a one-pound weight when locked and rigid. If a gripper were attached, the researchers believe the robot could grow long enough to maneuver through a narrow space and perform tasks such as unscrewing a cap.

 



Researchers Develop Resilient RoboBee with Soft Muscles


Researchers at the Harvard Microrobotics Laboratory at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), along with the Wyss Institute for Biologically Inspired Engineering, have developed a RoboBee powered by soft artificial muscles. The microrobot can crash into walls, fall onto the ground, and collide with other RoboBees without suffering damage. In a big moment for robotics, the RoboBee is the first microrobot powered by soft actuators to achieve controlled flight.

Yufeng Chen is first author of the paper and a former graduate student and postdoctoral fellow at SEAS.

“There has been a big push in the field of microrobotics to make mobile robots out of soft actuators because they are so resilient,” said Chen. “However, many people in the field have been skeptical that they could be used for flying robots because the power density of those actuators simply hasn’t been high enough and they are notoriously difficult to control. Our actuator has high enough power density and controllability to achieve hovering flight.”

The research was published in Nature.

Issues Encountered

One of the problems the researchers dealt with was power density. They turned to the electrically-driven soft actuators developed in the lab of David Clarke, the Extended Tarr Family Professor of Materials. These soft actuators are made using dielectric elastomers, soft materials with strong insulating properties that deform when an electric field is applied.
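
The actuation principle can be summarized by the equivalent electrostatic (Maxwell) pressure that squeezes the elastomer film, p = ε0·εr·(V/t)². The material and drive values in this back-of-the-envelope sketch are assumed for illustration, not taken from the paper:

```python
EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(eps_r, voltage, thickness):
    """Equivalent electrostatic pressure squeezing a dielectric
    elastomer film: p = eps_0 * eps_r * (V / t)**2, in pascals.
    All inputs below are assumed, illustrative values."""
    field = voltage / thickness        # electric field, V/m
    return EPS_0 * eps_r * field ** 2  # actuation pressure, Pa

# Assumed numbers: relative permittivity ~3, 1 kV across a 20 um film.
print(maxwell_pressure(3.0, 1_000.0, 20e-6))  # ~6.6e4 Pa (about 66 kPa)
```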

After the researchers improved the electrode conductivity, the actuator could be operated at 500 Hertz, on par with the rigid actuators previously used in similar robots.

Another issue with soft actuators is that the system often becomes unstable. To get around this, the researchers developed a lightweight airframe with a piece of vertical constraining thread that prevents the actuator from buckling.

Flight Capability

Within these small-scale robots, the soft actuators can easily be swapped out and assembled. To showcase various flight capabilities, the researchers built several models of the soft-powered RoboBee.

One model has two wings and can take off from the ground, but it has no further control. A four-wing, two-actuator model can fly in a cluttered environment, avoiding multiple collisions within a single flight.

Elizabeth Farrell Helbling is a former graduate student at SEAS, and she co-authored the paper. 

“One advantage of small-scale, low-mass robots is their resilience to external impacts,” she said. “The soft actuator provides an additional benefit because it can absorb impact better than traditional actuation strategies. This would come in handy in potential applications such as flying through rubble for search and rescue missions.”

Another model is the eight-wing, four-actuator RoboBee, which performed controlled hovering flight, the first time this has been demonstrated by a soft-powered flying microrobot.

What’s Next?

The researchers are now looking to increase the efficiency of the soft-powered RoboBee, which still has a long way to go before it catches up with traditional flying robots.

Robert Wood is the Charles River Professor of Engineering and Applied Sciences at SEAS, a core faculty member of the Wyss Institute for Biologically Inspired Engineering, and senior author of the paper.

“Soft actuators with muscle-like properties and electrical activation represent a grand challenge in robotics,” says Professor Wood. “If we could engineer high-performance artificial muscles, the sky is the limit for what robots we could build.”

 
