
Robotics

Flexible Robot “Grows” Like a Plant


Engineers from MIT have designed a robot that can extend a chain-like appendage. This makes the robot extremely flexible, and it can configure itself in many different ways. At the same time, it is strong enough to support heavy loads or apply torque, making it capable of assembling parts in tight spaces. After completing its tasks, the robot can retract the appendage and extend it again with a different length and shape.

This newly developed robot could make a difference in settings like warehouses, where most robots cannot reach into narrow spaces. The new plant-like robot could be used to grab products at the back of a shelf, or even to maneuver around a car’s engine parts to unscrew an oil cap.

The design was inspired by plants and the way they grow. In that process, nutrients are transported to the plant’s tip as a fluid. Once they reach the tip, they are converted into solid material that builds a supporting stem, a little at a time.

The plant-like robot has a “growing point” or gearbox, which draws a loose chain of interlocking blocks into the box. Once there, gears lock the chain units together and release the chain, unit by unit, until it forms a rigid appendage. 

Team of Engineers

The new robot was presented this week at the IEEE International Conference on Intelligent Robots and Systems (IROS) in Macau. In the future, the engineers would like to add grippers, cameras, and sensors that could be mounted onto the gearbox. This would allow the robot to tighten a loose screw after making its way through an aircraft’s propulsion system, or to retrieve a product without disturbing its immediate surroundings.

Harry Asada is a professor of mechanical engineering at MIT.

“Think about changing the oil in your car,” Asada says. “After you open the engine roof, you have to be flexible enough to make sharp turns, left and right, to get to the oil filter, and then you have to be strong enough to twist the oil filter cap to remove it.”

Tongxi Yan is a former graduate student in Asada’s lab, and he led the work.

“Now we have a robot that can potentially accomplish such tasks,” he says. “It can grow, retract, and grow again to a different shape, to adapt to its environment.”

The team of engineers also included MIT graduate student Emily Kamienski and visiting scholar Seiichi Teshigawara.

Plant-Like Robot

After defining the key aspects of plant growth, the team set out to implement them in a robot.

“The realization of the robot is totally different from a real plant, but it exhibits the same kind of functionality, at a certain abstract level,” Asada says.

The gearbox was designed to represent the robot’s “growing tip,” the equivalent of a plant’s bud, where nutrients flow up and a rigid stem is built. The box houses a system of gears and motors that pull up a fluidized material, which in this robot is a sequence of interlocking 3-D printed plastic units.

The robot can be programmed to choose which units to lock together and which to leave unlocked, allowing it to form specific shapes and “grow” in specific directions.
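As a rough illustration of that idea, here is a minimal sketch, not the team’s actual control code, of how a chain of lockable units might be represented in software: each unit is either locked at a chosen bend angle or left free, and the locked pattern determines the shape the appendage traces out. The unit length, bend angles, and class names are illustrative assumptions.

```python
import math

class ChainUnit:
    """One 3-D printed link: either locked at a fixed bend angle or left free (straight)."""
    def __init__(self, locked=False, bend_deg=0.0):
        self.locked = locked
        self.bend_deg = bend_deg if locked else 0.0

def chain_shape(units, unit_length=0.05):
    """Return the 2-D positions traced out by the chain (illustrative forward kinematics)."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for unit in units:
        heading += math.radians(unit.bend_deg)   # a locked unit adds a bend
        x += unit_length * math.cos(heading)
        y += unit_length * math.sin(heading)
        points.append((x, y))
    return points

# "Grow" a chain that runs straight, then curves left as if rounding an obstacle.
chain = [ChainUnit() for _ in range(5)] + [ChainUnit(locked=True, bend_deg=30) for _ in range(3)]
for px, py in chain_shape(chain):
    print(f"({px:.3f}, {py:.3f})")
```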

“It can be locked in different places to be curved in different ways, and have a wide range of motions,” Yan says.

The chain is able to support a one-pound weight when locked and rigid. If a gripper were attached, the researchers believe the robot could grow long enough to maneuver through a narrow space and perform tasks such as unscrewing a cap.

 


Robotics

Industrial Robotics Company ABB Joins Up With AI Startup Covariant


The AI startup Covariant and the industrial robotics company ABB will be partnering to engineer sophisticated robots that can pick up and manipulate a wide variety of objects. These robots will be used in warehouses and other industrial settings.

As Fortune reported, ABB is primarily involved in building robots for car manufacturers, but the company wants to branch out to other sectors. ABB is aiming to move into logistics, where its robots would be used in large warehouses, such as those run by Amazon, to manipulate items, package goods, and make shipments.

According to ABB president Sami Atiya, as reported by Fortune, the company sought partners experienced in creating sophisticated computer vision applications. While ABB already uses computer vision algorithms to operate some of its robots, it aimed to push the envelope and create reliable, high-dexterity robots capable of maneuvering and manipulating thousands of different objects.

The company examined many candidates before settling on Covariant as its partner. Covariant is a robotics research company whose researchers come from places like OpenAI and the University of California, Berkeley. Covariant produced the only software examined by ABB that could reliably recognize many different items without the intervention of human operators.

The computer vision and robotics applications developed by Covariant were trained with reinforcement learning. Thanks to deep neural networks and reinforcement learning, Covariant was able to create software that learns through experience and can reliably and consistently recognize objects once a pattern has been learned. The CEO of Covariant, Peter Chen, was interviewed by Fortune. Chen explained that as more robotics companies like ABB branch out into new industries and markets, the goal becomes the creation of robots capable of a wider variety of tasks than those currently used in many manufacturing and logistics operations. Most of the robots employed in industrial capacities are only capable of doing a handful of very specific things. Chen explained that the goal is to create robots capable of adaptation.
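As a rough illustration of the learning-through-experience idea described above, and not Covariant’s actual system, the sketch below shows a pick policy that updates its estimate of which grasp strategy works best for each item type from trial-and-error reward, in the spirit of reinforcement learning. The item classes, grasp names, and success rates are assumptions for illustration only.

```python
import random

# Hypothetical item classes and grasp strategies; not from Covariant.
ITEMS = ["box", "bag", "bottle"]
GRASPS = ["suction", "pinch", "wrap"]

# Estimated success rate for each (item, grasp) pair, learned from experience.
value = {(i, g): 0.5 for i in ITEMS for g in GRASPS}
counts = {(i, g): 0 for i in ITEMS for g in GRASPS}

def pick_grasp(item, epsilon=0.1):
    """Epsilon-greedy choice: mostly exploit the best-known grasp, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(GRASPS)
    return max(GRASPS, key=lambda g: value[(item, g)])

def update(item, grasp, success):
    """Incrementally update the success estimate from one observed pick attempt."""
    counts[(item, grasp)] += 1
    value[(item, grasp)] += (float(success) - value[(item, grasp)]) / counts[(item, grasp)]

# Simulated warehouse trials: the reward is simply whether the pick succeeded.
for _ in range(1000):
    item = random.choice(ITEMS)
    grasp = pick_grasp(item)
    success = random.random() < (0.9 if grasp == "suction" else 0.4)  # toy environment
    update(item, grasp, success)

print({k: round(v, 2) for k, v in value.items()})
```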

As a result of the partnership with Covariant, ABB will gain insight into the technology that drives Covariant’s AI, and this knowledge could help it better integrate AI into the technology that powers its existing robots. Currently, Covariant is a fairly small operation with only a handful of robots in full-time operation, spread across the electronics, pharmaceutical, and apparel industries. However, its collaboration with ABB could drive substantial growth.

The partnership between Covariant and ABB highlights the increasing role of AI startups in the robotics field. Other examples of AI startups collaborating with robotics companies include the Japanese corporation IHI establishing a partnership with the AI startup Osaro. That collaboration also focused on using robots to grasp and manipulate objects.

While there is currently a lot of focus on robots automating away human jobs, in some industries there simply aren’t enough humans to do those jobs to begin with. A recent report on the logistics sector estimates that over half of all logistics companies will face staff shortages over the next five years, with warehouse workers in particularly short supply. The report attributes the labor shortage to falling unemployment rates, long hours, tedious work, and low wages.


Robotics

Researchers Bring Sense of Touch to Robotic Finger


Researchers at Columbia Engineering have brought a sense of touch to a newly developed robotic finger. The finger can localize touch with extremely high precision over large, multicurved surfaces, bringing robots one step closer to human-like dexterity.

Matei Ciocarlie is an associate professor in the departments of mechanical engineering and computer science. Ciocarlie led the research in collaboration with Electrical Engineering Professor Ioannis (John) Kymissis. 

“There has long been a gap between stand-alone tactile sensors and fully integrated tactile fingers — tactile sensing is still far from ubiquitous in robotic manipulation,” says Ciocarlie. “In this paper, we have demonstrated a multicurved robotic finger with accurate touch localization and normal force detection over complex 3D surfaces.”

Current methods for integrating touch sensors into robot fingers face many challenges: it is difficult to cover multicurved surfaces, wire counts are high, and the sensors are hard to fit into small fingertips, which prevents their use in dexterous hands. The Columbia Engineering team got around these challenges with a new approach: they used overlapping signals from light emitters and receivers embedded in a transparent waveguide layer that covers the functional areas of the finger.

By measuring light transport between every emitter and receiver, the team obtained a signal data set that changes as the finger deforms under touch. Useful information, such as contact location and applied normal force, was then extracted from the data using data-driven deep learning methods, without the need for analytical models.

Through this method, the research team developed a fully integrated, sensorized robot finger that has a low wire count. It was built through the use of accessible manufacturing methods and can be easily integrated into dexterous hands. 

The study was published online in IEEE/ASME Transactions on Mechatronics.

The first part of the project was the use of light to sense touch. A layer of transparent silicone sits underneath the “skin” of the finger, and the team shines light into it from more than 30 LEDs. The finger also has over 30 photodiodes that measure how the light bounces around. As soon as the finger comes into contact with something, the skin deforms and the light shifts within the transparent layer underneath. The researchers then measure how much light travels from every LED to every photodiode, yielding about 1,000 signals, each of which carries information about the contact.

“The human finger provides incredibly rich contact information — more than 400 tiny touch sensors in every square centimeter of skin!” says Ciocarlie. “That was the model that pushed us to try and get as much data as possible from our finger. It was critical to be sure all contacts on all sides of the finger were covered — we essentially built a tactile robot finger with no blind spots.”

The second part of the project was designing the system so that the data could be processed by machine learning algorithms. The data is far too complex for humans to interpret directly, but current machine learning techniques can learn to extract specific information from it, such as where the finger is being touched, what is touching it, and how much force is being applied.

“Our results show that a deep neural network can extract this information with very high accuracy,” says Kymissis. “Our device is truly a tactile finger designed from the very beginning to be used in conjunction with AI algorithms.”
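As a rough illustration of this kind of mapping, and not the Columbia team’s actual model or data, the sketch below defines a small fully connected network that takes a vector of roughly 1,000 light-transport signals and predicts a 3-D contact location plus a normal force. The layer sizes, training loop, and synthetic data are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: ~1,000 light-transport signals in, (x, y, z, force) out.
N_SIGNALS, N_OUTPUTS = 1000, 4

# A small fully connected regressor; the real architecture is not specified in the article.
model = nn.Sequential(
    nn.Linear(N_SIGNALS, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, N_OUTPUTS),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in data: in practice this would come from calibrated touch trials.
signals = torch.randn(512, N_SIGNALS)   # one row per recorded touch
targets = torch.randn(512, N_OUTPUTS)   # contact location and normal force labels

for epoch in range(10):
    optimizer.zero_grad()
    pred = model(signals)
    loss = loss_fn(pred, targets)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```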

The team also designed the finger so that it can be used on robotic hands. The finger is able to collect nearly 1,000 signals, but it only requires one 14-wire cable connecting it to the hand. There are also no complex off-board electronics required for it to function. 

The team is currently integrating the fingers into two dexterous hands, which it will use to demonstrate dexterous manipulation abilities.

“Dexterous robotic manipulation is needed now in fields such as manufacturing and logistics, and is one of the technologies that, in the longer term, are needed to enable personal robotic assistance in other areas, such as healthcare or service domains,” says Ciocarlie.

 


Robotics

Swarm Robots Help Self-Driving Cars Avoid Collisions


The top priority for companies developing self-driving vehicles is that they can navigate safely without crashing or causing traffic jams. Northwestern University researchers have brought that goal one step closer with the development of the first decentralized algorithm with a collision-free, deadlock-free guarantee.

The researchers tested the algorithm in a simulation of 1,024 robots and on a swarm of 100 real robots in the lab. In both cases, the robots reliably, safely, and efficiently converged on a predetermined shape in less than a minute.

Northwestern’s Michael Rubenstein led the study. He is the Lisa Wissner-Slivka and Benjamin Slivka Professor in Computer Science in Northwestern’s McCormick School of Engineering. 

“If you have many autonomous vehicles on the road, you don’t want them to collide with one another or get stuck in a deadlock,” said Rubenstein. “By understanding how to control our swarm robots to form shapes, we can understand how to control fleets of autonomous vehicles as they interact with each other.”

The paper is set to be published in the journal IEEE Transactions on Robotics later this month. 

A swarm of small robots has an advantage over one large robot or a swarm led by a single robot: there is no centralized control. Centralized control can become a single point of failure, and Rubenstein’s decentralized algorithm acts as a fail-safe.

“If the system is centralized and a robot stops working, then the entire system fails,” Rubenstein said. “In a decentralized system, there is no leader telling all the other robots what to do. Each robot makes its own decisions. If one robot fails in a swarm, the swarm can still accomplish the task.”

To avoid collisions and deadlocks, the robots coordinate with each other. The ground beneath them acts as a grid for the algorithm, and each robot knows its position on that grid thanks to technology similar to GPS.

Before moving from one spot to another, each robot uses sensors to communicate with the others and determine whether nearby spaces on the grid are vacant or occupied.

“The robots refuse to move to a spot until that spot is free and until they know that no other robots are moving to that same spot,” Rubenstein said. “They are careful and reserve a space ahead of time.”
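A minimal sketch of that reserve-then-move rule, and not Northwestern’s published algorithm, might look like the following: each robot claims its target cell on the grid before stepping into it, and stays put if the cell is occupied or already claimed. The shared-set grid representation and helper names are illustrative assumptions.

```python
# Occupied cells and outstanding reservations on the shared grid.
occupied = set()   # cells currently holding a robot
reserved = set()   # cells a robot has claimed for its next move

def try_move(robot_pos, target):
    """One decision step for a single robot: reserve the target cell, then move into it."""
    if target in occupied or target in reserved:
        return robot_pos                 # wait: the cell is taken or already claimed
    reserved.add(target)                 # claim the cell ahead of time
    occupied.discard(robot_pos)
    occupied.add(target)                 # step into the claimed cell
    reserved.discard(target)
    return target

# Two robots contending for the same cell: only the first one gets to move.
occupied.update({(0, 0), (2, 0)})
a = try_move((0, 0), (1, 0))
b = try_move((2, 0), (1, 0))
print(a, b)   # (1, 0) (2, 0): the second robot waits its turn
```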

The robots communicate with each other to form a shape, and this coordination is made possible by their deliberate near-sightedness.

“Each robot can only sense three or four of its closest neighbors,” Rubenstein explained. “They can’t see across the whole swarm, which makes it easier to scale the system. The robots interact locally to make decisions without global information.”

A hundred robots can coordinate to form a shape within a minute, compared with the hour that some previous approaches required. Rubenstein hopes his algorithm will be used in both driverless vehicles and automated warehouses.

“Large companies have warehouses with hundreds of robots doing tasks similar to what our robots do in the lab,” he said. “They need to make sure their robots don’t collide but do move as quickly as possible to reach the spot where they eventually give an object to a human.”

 
