Researchers Develop Process Flow to Guide 3D Printing in Soft Robotics Field

Soft robotics is a growing field within artificial intelligence. These systems can safely adapt to complex environments, and they come in a wide range of designs and length scales, from meters down to submicrometers.

Soft robots on the millimeter scale are of special importance, since they can combine miniature actuators controlled by pneumatic pressure. These robots are useful for navigating complex, confined spaces and for manipulating small objects.

One consequence of scaling soft pneumatic robots down to millimeters is that their features shrink by more than an order of magnitude, making them extremely delicate to fabricate through traditional means such as molding and soft lithography. Newer technologies like digital light processing (DLP) offer high theoretical resolutions, but printing the tiny hollow channels that pneumatic actuators require without clogging remains difficult, and successful examples of 3D-printed miniature soft pneumatic robots are still rare.

Researchers from Singapore and China, mainly from the Singapore University of Technology and Design (SUTD), Southern University of Science and Technology (SUSTech), and Zhejiang University (ZJU), have created a generic process flow to guide DLP 3D printing of miniature pneumatic actuators for soft robots, with overall sizes ranging from 2 to 15 mm. The research was published in Advanced Materials Technologies.

“We leveraged the high efficiency and resolution of DLP 3D printing to fabricate miniature soft robotic actuators,” said Associate Professor Qi (Kevin) Ge from SUSTech, who led the project. “To ensure reliable printing fidelity and mechanical performance in the printed products, we introduced a new paradigm for systematic and efficient tailoring of the material formulation and key processing parameters.”

In DLP 3D printing, photo-absorbers are added to the polymer solution to enhance printing resolution in both the lateral and vertical directions. Increasing the photo-absorber loading, however, rapidly degrades the material’s elasticity, which soft robots depend on to sustain large deformations.

“To achieve a reasonable trade-off, we first selected a photo-absorber with good absorbance at the wavelength of the projected UV light and determined the appropriate material formulation based on mechanical performance tests. Next, we characterized the curing depth and XY fidelity to identify the suitable combination of exposure time and sliced layer thickness,” explained co-first author Yuan-Fang Zhang from SUTD.
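
The curing-depth calibration Zhang describes is commonly modeled with the Jacobs working-curve equation from stereolithography, which relates cure depth to UV exposure dose. The Python sketch below shows that style of calculation; the resin constants and the 1.2x over-cure factor are illustrative assumptions, not values from the paper.

```python
import math

# Classic Jacobs working-curve model for vat photopolymerization:
#     cure_depth = Dp * ln(E / Ec)
# Dp is the resin's light-penetration depth and Ec its critical exposure.
# Both constants below are illustrative placeholders, not values from
# the SUTD/SUSTech/ZJU paper.
DP_UM = 80.0       # penetration depth in micrometers (assumed)
EC_MJ_CM2 = 8.0    # critical exposure in mJ/cm^2 (assumed)

def cure_depth_um(dose_mj_cm2: float) -> float:
    """Depth of cured resin for a given UV exposure dose."""
    if dose_mj_cm2 <= EC_MJ_CM2:
        return 0.0  # below the gelation threshold, nothing solidifies
    return DP_UM * math.log(dose_mj_cm2 / EC_MJ_CM2)

def dose_for_layer(layer_um: float, overcure: float = 1.2) -> float:
    """Invert the model: dose that cures ~1.2x the sliced layer so
    adjacent layers bond, without over-curing into pneumatic channels."""
    return EC_MJ_CM2 * math.exp(overcure * layer_um / DP_UM)

for layer in (25.0, 50.0, 100.0):   # candidate slice thicknesses (um)
    dose = dose_for_layer(layer)
    print(f"layer {layer:5.1f} um -> dose {dose:7.2f} mJ/cm^2 "
          f"-> cure depth {cure_depth_um(dose):5.1f} um")
```

Sweeping candidate layer thicknesses this way makes the exposure/thickness trade-off explicit: thicker layers print faster but demand higher doses, which risks curing resin inside the hollow channels.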

“By following this process flow, we are able to produce an assortment of miniature soft pneumatic robotic actuators with various structures and morphing modes, all smaller than a one Singapore Dollar coin, on a self-built multimaterial 3D printing system. The same methodology should be compatible with commercial stereolithography (SLA) or DLP 3D printers as no hardware modification is required,” said corresponding author Professor Qi Ge from SUSTech.

The researchers also developed a soft debris remover, consisting of a continuum manipulator and a 3D-printed miniature soft pneumatic gripper, that is capable of navigating through confined spaces and collecting small objects from hard-to-reach places.

These developments will advance the 3D printing of miniature soft robots with complex geometries and sophisticated multimaterial designs. Integrating printed miniature soft pneumatic actuators into robotic systems opens up many opportunities, in areas ranging from jet-engine maintenance to minimally invasive surgery, and the technology will continue to be developed to benefit many more fields.

More information on current research in these fields is available from the Singapore University of Technology and Design.

Industrial Robotics Company ABB Joins Up With AI Startup Covariant

The AI startup Covariant and the industrial robotics company ABB will be partnering to engineer sophisticated robots that can pick up and manipulate a wide variety of objects. These robots will be used in warehouses and other industrial settings.

As Fortune reported, ABB is primarily involved in creating robots for car manufacturers, but the company wants to branch out into other sectors. ABB is aiming to move into logistics, where its robots would be used in large warehouses, such as those run by Amazon, to manipulate items, package goods, and make shipments.

According to ABB president Sami Atiya, as quoted by Fortune, ABB sought partners experienced in creating sophisticated computer vision applications. While the company already uses computer vision algorithms to operate some of its robots, ABB aimed to push the envelope and create reliable, high-dexterity robots capable of maneuvering and manipulating thousands of different objects.

ABB evaluated many different companies before settling on Covariant as its partner. Covariant is a robotics research company whose researchers come from places like OpenAI and the University of California, Berkeley. Covariant produced the only software ABB examined that could reliably recognize many different items without the intervention of human operators.

The computer vision and robotics applications developed by Covariant were trained with reinforcement learning. Thanks to deep neural networks and reinforcement learning, Covariant was able to create software that learns through experience and can reliably and consistently recognize objects once a pattern has been learned. Covariant’s CEO, Peter Chen, explained in an interview with Fortune that as more robotics companies like ABB branch out into new industries and markets, the goal becomes creating robots capable of a wider variety of tasks than those currently used in manufacturing and logistics operations. Most robots employed in industrial settings can only do a handful of very specific things; the goal, Chen explained, is to create robots capable of adaptation.
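
As a rough illustration of what “learning through experience” means here, the toy Python sketch below treats bin picking as a contextual bandit: a softmax policy chooses among candidate grasp points and receives a REINFORCE-style update whenever a simulated grasp succeeds. Every detail, from the feature sizes to the synthetic success model, is an assumption for illustration; this is not Covariant’s actual system.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 8, 16            # candidate grasps, feature dimension (assumed)
W = np.zeros((K, D))    # softmax policy weights

def policy(x):
    """Probability of picking each candidate grasp given features x."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    return p / p.sum()

def simulate_grasp(a, x):
    """Purely synthetic stand-in for the physical grasp outcome."""
    hidden = np.linspace(-1.0, 1.0, K)[:, None] * np.ones(D)
    p_success = 1.0 / (1.0 + np.exp(-(hidden @ x)[a]))
    return float(rng.random() < p_success)

lr = 0.1
for _ in range(2000):
    x = rng.normal(size=D)          # features of the current scene
    probs = policy(x)
    a = rng.choice(K, p=probs)      # pick a grasp point
    reward = simulate_grasp(a, x)   # 1.0 on success, 0.0 on failure
    # REINFORCE: grad of log pi(a|x) w.r.t. W is (onehot(a) - probs) x^T
    grad = -np.outer(probs, x)
    grad[a] += x
    W += lr * reward * grad
```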

As a result of the partnership with Covariant, ABB will gain insight into the technology that drives Covariant’s AI, and this knowledge could help it better integrate AI into the technology that powers its existing robots. Currently, Covariant is a fairly small operation with only a handful of robots in full-time operation, spread across the electronics, pharmaceutical, and apparel industries. Its collaboration with ABB, however, could drive substantial growth.

The partnership between Covariant and ABB highlights the increasing role of AI startups in the robotics field. Other examples of AI startups collaborating with robotics companies include the Japanese corporation IHI establishing a partnership with the AI startup Osaro, a collaboration that likewise concerned the use of robots to grasp and manipulate objects.

While there is currently a lot of focus on robots automating away human jobs, in some industries there simply aren’t enough humans to do those jobs in the first place. A recent report about the logistics sector estimates that over half of all logistics companies will face staff shortages over the next five years, with warehouse workers in particularly short supply. The report attributes the labor shortage within the logistics industry to falling unemployment rates, long hours, tedious work, and low wages.

Researchers Bring Sense of Touch to Robotic Finger

Researchers at Columbia Engineering have brought a sense of touch to a newly developed robotic finger, which can localize touch with very high precision over large, multicurved surfaces. The development brings robotics one step closer to human-like touch.

Matei Ciocarlie, an associate professor in the departments of mechanical engineering and computer science, led the research in collaboration with electrical engineering professor Ioannis (John) Kymissis.

“There has long been a gap between stand-alone tactile sensors and fully integrated tactile fingers — tactile sensing is still far from ubiquitous in robotic manipulation,” says Ciocarlie. “In this paper, we have demonstrated a multicurved robotic finger with accurate touch localization and normal force detection over complex 3D surfaces.”

Current methods for integrating touch sensors into robot fingers face several challenges: covering multicurved surfaces is difficult, wire counts are high, and the sensors are hard to fit into small fingertips, all of which prevents their use in dexterous hands. The Columbia Engineering team got around these challenges with a new approach: overlapping signals from light emitters and receivers embedded in a transparent waveguide layer that covers the functional areas of the finger.

By measuring light transport between every emitter and receiver, the team obtained a signal data set that changes in response to deformation of the finger under touch. Useful information, such as contact location and applied normal force, was then extracted from the data with data-driven deep learning methods, without the need for analytical models.

Through this method, the research team developed a fully integrated, sensorized robot finger with a low wire count. It was built using accessible manufacturing methods and can be easily integrated into dexterous hands.

The study was published online in IEEE/ASME Transactions on Mechatronics.

The first part of the project was using light to sense touch. Underneath the “skin” of the finger is a layer of transparent silicone, into which the team shone light from more than 30 LEDs. The finger also has over 30 photodiodes that measure how the light bounces around. Whenever the finger touches something, its skin deforms and the light shifts in the transparent layer beneath it. By measuring how much light travels from every LED to every diode, the researchers obtain roughly 1,000 signals, each containing information about the contact.

“The human finger provides incredibly rich contact information — more than 400 tiny touch sensors in every square centimeter of skin!” says Ciocarlie. “That was the model that pushed us to try and get as much data as possible from our finger. It was critical to be sure all contacts on all sides of the finger were covered — we essentially built a tactile robot finger with no blind spots.”

The second part of the project was designing the system so that the data could be processed by machine learning algorithms. The data is far too complex for humans to interpret directly, but current machine learning techniques can learn to extract specific information, such as where the finger is being touched, what is touching the finger, and how much force is being applied.

“Our results show that a deep neural network can extract this information with very high accuracy,” says Kymissis. “Our device is truly a tactile finger designed from the very beginning to be used in conjunction with AI algorithms.”
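
As a sketch of how such a mapping might be set up, the PyTorch snippet below regresses contact position and normal force from the roughly 1,000 light-transport signals. The architecture, layer sizes, and random stand-in data are assumptions for illustration, not the network from the IEEE/ASME Transactions on Mechatronics paper.

```python
import torch
import torch.nn as nn

# ~30 emitters x ~30 receivers give on the order of 1,000 raw signals;
# the exact counts here are assumed for illustration.
N_EMITTERS, N_RECEIVERS = 32, 32
N_SIGNALS = N_EMITTERS * N_RECEIVERS

# Small regression network: signals -> (x, y, z contact location, force).
model = nn.Sequential(
    nn.Linear(N_SIGNALS, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 4),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for calibration data: in practice one would press the finger
# at known locations/forces and record the corresponding signal vectors.
signals = torch.randn(512, N_SIGNALS)
targets = torch.randn(512, 4)

for epoch in range(100):
    pred = model(signals)
    loss = loss_fn(pred, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```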

The team also designed the finger so that it can be used on robotic hands: although it collects nearly 1,000 signals, it requires only a single 14-wire cable connecting it to the hand, and no complex off-board electronics are needed for it to function.

The team currently has two dexterous hands that are being integrated with the fingers, and they will look to use the hands to demonstrate dexterous manipulation abilities.

“Dexterous robotic manipulation is needed now in fields such as manufacturing and logistics, and is one of the technologies that, in the longer term, are needed to enable personal robotic assistance in other areas, such as healthcare or service domains,” says Ciocarlie.

Swarm Robots Help Self-Driving Cars Avoid Collisions

The top priority for companies developing self-driving vehicles is ensuring that they can navigate safely without crashing or causing traffic jams. Northwestern University has brought that reality one step closer with the development of the first decentralized algorithm that carries a collision-free, deadlock-free guarantee.

The researchers tested the algorithm in a simulation of 1,024 robots, as well as on a swarm of 100 real robots in the lab. The robots reliably, safely, and efficiently converged to form a predetermined shape in less than a minute.

Northwestern’s Michael Rubenstein led the study. He is the Lisa Wissner-Slivka and Benjamin Slivka Professor in Computer Science in Northwestern’s McCormick School of Engineering. 

“If you have many autonomous vehicles on the road, you don’t want them to collide with one another or get stuck in a deadlock,” said Rubenstein. “By understanding how to control our swarm robots to form shapes, we can understand how to control fleets of autonomous vehicles as they interact with each other.”

The paper is set to be published in the journal IEEE Transactions on Robotics later this month. 

Using a swarm of small robots offers an advantage over a single large robot, or a swarm led by one robot: the absence of centralized control. Centralized control can become a single point of failure, and Rubenstein’s decentralized algorithm acts as a fail-safe.

“If the system is centralized and a robot stops working, then the entire system fails,” Rubenstein said. “In a decentralized system, there is no leader telling all the other robots what to do. Each robot makes its own decisions. If one robot fails in a swarm, the swarm can still accomplish the task.”

To avoid collisions and deadlocks, the robots coordinate with one another. The algorithm treats the ground beneath the robots as a grid, and each robot is aware of its position on that grid thanks to technology similar to GPS.

Before moving from one spot to another, each robot uses sensors to communicate with the others and determine whether nearby spaces on the grid are vacant or occupied.

“The robots refuse to move to a spot until that spot is free and until they know that no other robots are moving to that same spot,” Rubenstein said. “They are careful and reserve a space ahead of time.”
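
That reserve-then-move rule is easy to state in code. The Python toy below runs in a single process with shared dictionaries; in Rubenstein’s actual decentralized system each robot would hold such state locally and learn about nearby cells through neighbor communication, so treat this purely as an illustration of the rule.

```python
occupied = {}   # grid cell -> id of the robot standing on it
reserved = {}   # grid cell -> id of the robot that has claimed it

def try_step(robot_id, current, target):
    """Move one grid cell, but only if the target is free and unclaimed."""
    if target in occupied or target in reserved:
        return current              # wait: cell is busy or already claimed
    reserved[target] = robot_id     # reserve the space ahead of time
    occupied[target] = robot_id     # complete the move...
    del occupied[current]           # ...and vacate the old cell
    del reserved[target]
    return target

# Two robots contend for cell (1, 0); only the first to claim it moves.
occupied[(0, 0)] = "A"
occupied[(2, 0)] = "B"
print(try_step("A", (0, 0), (1, 0)))   # (1, 0): A reserves and moves
print(try_step("B", (2, 0), (1, 0)))   # (2, 0): B waits its turn
```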

The robots communicate with one another to form a shape, and the approach scales because the robots are deliberately near-sighted.

“Each robot can only sense three or four of its closest neighbors,” Rubenstein explained. “They can’t see across the whole swarm, which makes it easier to scale the system. The robots interact locally to make decisions without global information.”

One hundred robots can coordinate to form a shape within a minute, compared with the hour it took in some previous approaches. Rubenstein hopes his algorithm will be used in both driverless vehicles and automated warehouses.

“Large companies have warehouses with hundreds of robots doing tasks similar to what our robots do in the lab,” he said. “They need to make sure their robots don’t collide but do move as quickly as possible to reach the spot where they eventually give an object to a human.”