Researchers Build Robot from Multiple 3D-Printed Smaller Ones

Researchers from Georgia Institute of Technology have built a robot that consists entirely of smaller ones known as “smarticles.” This new locomotion technique challenges the conventional way of creating robots from motors, batteries, actuators, body segments, legs, and wheels. 

The research, published in the journal Science Robotics, was supported by the Army Research Office and the National Science Foundation and involved collaborators from Northwestern University.

On their own, these 3D-printed smart active particles can do nothing more than flap their two arms. Their potential changes when five of them are confined in a ring: by nudging one another, they form a robophysical system called a “supersmarticle” that can move on its own. If a light or sound sensor is added, the supersmarticle can be steered with an external stimulus.

The system is still in its infancy, but the idea of building robots out of groups of smaller ones has huge potential. It provides capabilities no single unit has, it could enable mechanically based control over very small robots, and it could open up new modes of locomotion.

Dan Goldman is a Dunn Family Professor in the School of Physics at the Georgia Institute of Technology. 

“These are very rudimentary robots whose behavior is dominated by mechanics and the laws of physics,” he said. “We are not looking to put sophisticated control, sensing and computation on them all. As robots become smaller and smaller, we’ll have to use mechanics and physics principles to control them because they won’t have the level of computation and sensing we would need for conventional control.”

The research built on an earlier study of construction staples poured into a container with removable sides. Nick Gravish, a former PhD student who is now a faculty member at the University of California San Diego, removed the container’s walls to create structures that could stand on their own. He realized that mechanical objects could be combined into structures capable of far more than their individual components.

“A robot made of other rudimentary robots became the vision,” Goldman said. “You could imagine making a robot in which you would tweak its geometric parameters a bit and what emerges is qualitatively new behaviors.”

Will Savoie, a graduate research assistant, used a 3D printer to create battery-powered smarticles with motors, simple sensors, and some computing power. Individually, a smarticle can’t do much, but when placed together in a ring the smarticles interact with one another and the group can change location.

“Even though no individual robot could move on its own, the cloud composed of multiple robots could move as it pushed itself apart and shrink as it pulled itself together,” according to Goldman. “If you put a ring around the cloud of little robots, they start kicking each other around and the larger ring — what we call a supersmarticle — moves around randomly.”

The researchers also learned that they could control the movement of the robots by using photo sensors. They were able to stop the arms from flapping with a beam of light. 

“If you angle the flashlight just right, you can highlight the robot you want to be inactive, and that causes the ring to lurch toward or away from it, even though no robots are programmed to move toward the light,” Goldman explained. “That allowed steering of the ensemble in a very rudimentary, stochastic way.”
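
The steering mechanism Goldman describes lends itself to a simple thought experiment: if every active smarticle gives the ring a small, roughly random nudge, then silencing the one facing the light removes its contribution and biases the ring’s otherwise-random drift. The Python sketch below illustrates that idea; its geometry, nudge model, and parameters are illustrative assumptions, not code or data from the paper.

```python
import numpy as np

# Toy model of light-based steering of a "supersmarticle" ring.
# All geometry and parameters are assumptions for illustration;
# this is not the Georgia Tech team's control code.

rng = np.random.default_rng(0)

N_SMARTICLES = 5
STEPS = 2000
PUSH = 0.01  # size of each nudge a flapping smarticle gives the ring

# Smarticles sit at fixed angles around the ring; a flashlight shines along +x.
angles = np.linspace(0, 2 * np.pi, N_SMARTICLES, endpoint=False)
light_direction = np.array([1.0, 0.0])

# The smarticle facing the light goes inactive: its photo sensor halts the arm flapping.
inactive = int(np.argmax(np.cos(angles) * light_direction[0] +
                         np.sin(angles) * light_direction[1]))

ring_pos = np.zeros(2)
for _ in range(STEPS):
    for i in range(N_SMARTICLES):
        if i == inactive:
            continue  # this smarticle's arms are frozen by the light
        # Each active smarticle kicks the ring in a noisy direction roughly away
        # from its own spot; the missing kick from the inactive unit biases the walk.
        kick_angle = angles[i] + np.pi + rng.normal(scale=1.0)
        ring_pos += PUSH * np.array([np.cos(kick_angle), np.sin(kick_angle)])

print("net displacement after", STEPS, "steps:", ring_pos)
```

In this toy setup the net displacement ends up biased along the light axis, echoing the “lurch toward or away from it” behavior Goldman describes, even though no individual unit is programmed to move toward the light.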

These developments could aid the creation of swarm robots made up of many smaller devices, which could be adapted to a wide variety of situations and applications. The U.S. Army has also taken an interest in the project, since it could lead to shape-changing robots able to alter their modalities and functions.

 


Industrial Robotics Company ABB Joins Up With AI Startup Covariant

The AI startup Covariant and the industrial robotics company ABB will be partnering to engineer sophisticated robots that can pick up and manipulate a wide variety of objects. These robots will be used in warehouses and other industrial settings.

As Fortune reported, the industrial robotics company ABB is primarily involved in the creation of robotics for car manufacturers, but the company wants to branch out to other sectors. ABB is aiming to become involved in logistics, where its robots will be used in large warehouses, such as those run by Amazon, to manipulate items, package goods, and make shipments.

According to Fortune, ABB president Sami Atiya said the company sought partners experienced in creating sophisticated computer vision applications. While ABB already uses computer vision algorithms to operate some of its robots, it aimed to push the envelope and create reliable, high-dexterity robots capable of maneuvering and manipulating thousands of different objects.

ABB evaluated many candidates before settling on Covariant as its partner. Covariant is a robotics research company whose researchers come from places like OpenAI and the University of California, Berkeley, and it produced the only software ABB examined that could reliably recognize many different items without the intervention of human operators.

The computer vision and robotics applications developed by Covariant were trained with reinforcement learning. Thanks to deep neural networks and reinforcement learning, Covariant was able to create software that learns through experience and can reliably and consistently recognize objects once a pattern has been learned. Covariant CEO Peter Chen, interviewed by Fortune, explained that as robotics companies like ABB branch out into new industries and markets, the goal becomes creating robots capable of a wider variety of tasks than those currently deployed in manufacturing and logistics operations. Most robots employed in industrial settings can only do a handful of very specific things; the goal, Chen said, is to create robots capable of adaptation.

As a result of the partnership, ABB will gain insight into the technology that drives Covariant’s AI, and this knowledge could help it better integrate AI into the systems that power its existing robots. Covariant is currently a fairly small operation with only a handful of robots in full-time operation, spread across industries such as electronics, pharmaceuticals, and apparel. Its collaboration with ABB, however, could spur substantial growth.

The partnership between Covariant and ABB highlights the increasing role of AI startups in the robotics field. Other examples of AI startups collaborating with robotics companies include the Japanese corporation IHI’s partnership with the AI startup Osaro, which likewise focuses on robots that grasp and manipulate objects.

While there is currently a lot of focus on robots automating away human jobs, in some industries there simply aren’t enough humans to do those jobs in the first place. A recent report on the logistics sector estimates that over half of all logistics companies will face staff shortages over the next five years, with warehouse workers in particularly short supply. The report attributes the labor shortage to falling unemployment rates, long hours, tedious work, and low wages.


Researchers Bring Sense of Touch to Robotic Finger

Researchers at Columbia Engineering have given a newly developed robotic finger a sense of touch. The finger can localize contact with very high precision over large, multicurved surfaces, bringing robots one step closer to human-like manipulation.

Matei Ciocarlie is an associate professor in the departments of mechanical engineering and computer science. Ciocarlie led the research in collaboration with Electrical Engineering Professor Ioannis (John) Kymissis. 

“There has long been a gap between stand-alone tactile sensors and fully integrated tactile fingers — tactile sensing is still far from ubiquitous in robotic manipulation,” says Ciocarlie. “In this paper, we have demonstrated a multicurved robotic finger with accurate touch localization and normal force detection over complex 3D surfaces.”

Current methods for integrating touch sensors into robot fingers face several challenges: multicurved surfaces are difficult to cover, wire counts are high, and the sensors are hard to fit into small fingertips, all of which prevents their use in dexterous hands. The Columbia Engineering team got around these challenges with a new approach: overlapping signals from light emitters and receivers embedded in a transparent waveguide layer that covers the functional areas of the finger.

By measuring light transport between every emitter and receiver, the team obtained a signal data set that changes in response to deformation of the finger under touch. Useful information, such as contact location and applied normal force, was then extracted from the data with data-driven deep learning methods rather than analytical models.

Through this method, the research team developed a fully integrated, sensorized robot finger that has a low wire count. It was built through the use of accessible manufacturing methods and can be easily integrated into dexterous hands. 

The study was published online in IEEE/ASME Transactions on Mechatronics.

The first part of the project was the use of light to sense touch. There is a layer of transparent silicone underneath the “skin” of the finger, and the team shined light into it from more than 30 LEDs. The finger also has over 30 photodiodes that are responsible for measuring how the light bounces around. As soon as the finger comes into contact with something, the skin deforms and light moves around in the transparent layer underneath the skin. The researchers then measure how much light goes from every LED to every diode in order to come up with about 1,000 signals. Each one of those signals contains information about the contact made.

“The human finger provides incredibly rich contact information — more than 400 tiny touch sensors in every square centimeter of skin!” says Ciocarlie. “That was the model that pushed us to try and get as much data as possible from our finger. It was critical to be sure all contacts on all sides of the finger were covered — we essentially built a tactile robot finger with no blind spots.”

The second part of the project was designing the data to be processed by machine learning algorithms. The data is far too complex for humans to interpret directly, but current machine learning techniques can learn to extract specific information, such as where the finger is being touched, what is touching it, and how much force is being applied.

“Our results show that a deep neural network can extract this information with very high accuracy,” says Kymissis. “Our device is truly a tactile finger designed from the very beginning to be used in conjunction with AI algorithms.”
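
As a rough illustration of this kind of data-driven extraction, the sketch below trains a small fully connected network to regress contact location and normal force from a flattened grid of LED-to-photodiode signals. The layer sizes, the 32-by-32 signal grid, and the random placeholder data are assumptions made for the example; they are not the Columbia team’s actual architecture or dataset.

```python
import torch
from torch import nn

# Illustrative sketch: map ~1,000 light-transport signals (every LED-to-photodiode
# pair) to a contact location (x, y, z) and a normal force. The architecture and
# the synthetic data are assumptions, not the published model or dataset.

N_LEDS, N_DIODES = 32, 32          # "more than 30" emitters and receivers
N_SIGNALS = N_LEDS * N_DIODES      # roughly 1,000 signals per touch
N_OUTPUTS = 4                      # contact location (x, y, z) + normal force

model = nn.Sequential(
    nn.Linear(N_SIGNALS, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, N_OUTPUTS),
)

# Placeholder tensors standing in for real calibration measurements.
signals = torch.randn(512, N_SIGNALS)
targets = torch.randn(512, N_OUTPUTS)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(signals), targets)
    loss.backward()
    optimizer.step()

print("final training loss:", loss.item())
```

In practice the network would be trained on calibrated touches rather than random tensors, but the overall structure, roughly a thousand inputs mapped to a handful of contact quantities, matches the description above.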

The team also designed the finger so that it can be used on robotic hands. The finger is able to collect nearly 1,000 signals, but it only requires one 14-wire cable connecting it to the hand. There are also no complex off-board electronics required for it to function. 

The team is currently integrating the fingers into two dexterous hands, which it will use to demonstrate dexterous manipulation abilities.

“Dexterous robotic manipulation is needed now in fields such as manufacturing and logistics, and is one of the technologies that, in the longer term, are needed to enable personal robotic assistance in other areas, such as healthcare or service domains,” says Ciocarlie.

 


Swarm Robots Help Self-Driving Cars Avoid Collisions

The top priority for companies developing self-driving vehicles is ensuring that the vehicles can navigate safely without crashing or causing traffic jams. Northwestern University has brought that goal one step closer with the development of the first decentralized algorithm that carries a collision-free, deadlock-free guarantee.

The researchers tested the algorithm in a simulation of 1,024 robots as well as on a swarm of 100 real robots in the lab. In less than a minute, the robots reliably, safely, and efficiently converged to form a predetermined shape.

Northwestern’s Michael Rubenstein led the study. He is the Lisa Wissner-Slivka and Benjamin Slivka Professor in Computer Science in Northwestern’s McCormick School of Engineering. 

“If you have many autonomous vehicles on the road, you don’t want them to collide with one another or get stuck in a deadlock,” said Rubenstein. “By understanding how to control our swarm robots to form shapes, we can understand how to control fleets of autonomous vehicles as they interact with each other.”

The paper is set to be published in the journal IEEE Transactions on Robotics later this month. 

A swarm of small robots has an advantage over a single large robot, or a swarm led by one robot: the absence of centralized control. Centralized control is a major potential point of failure, and Rubenstein’s decentralized algorithm acts as a fail-safe.

“If the system is centralized and a robot stops working, then the entire system fails,” Rubenstein said. “In a decentralized system, there is no leader telling all the other robots what to do. Each robot makes its own decisions. If one robot fails in a swarm, the swarm can still accomplish the task.”

In order to avoid collisions and jams, the robots coordinate with each other. The ground beneath the robots acts as a grid for the algorithm, and each robot is aware of its position on the grid due to technology similar to GPS. 

Before moving from one spot to another, each robot uses sensors to communicate with the others and determine whether nearby spaces on the grid are vacant or occupied.

“The robots refuse to move to a spot until that spot is free and until they know that no other robots are moving to that same spot,” Rubenstein said. “They are careful and reserve a space ahead of time.”
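
A toy version of that reserve-before-move rule might look like the sketch below: a robot steps onto a grid cell only once the cell is both unoccupied and unreserved, and it claims the cell before moving. The data structures and loop here are illustrative assumptions; the published algorithm, with its deadlock-free guarantee and purely local sensing, is considerably more involved.

```python
import random

# Toy illustration of the reserve-before-move rule; not the published algorithm.

GRID = 10
robots = {0: (0, 0), 1: (0, 1), 2: (5, 5)}   # robot id -> current grid cell
occupied = set(robots.values())               # cells currently holding a robot
reserved = {}                                 # cell -> id of the robot that claimed it

def try_step(robot_id, target):
    """Move robot_id onto `target` only if the cell is free and unreserved."""
    if target in occupied or target in reserved:
        return False                          # wait: someone is there or has claimed it
    reserved[target] = robot_id               # reserve the space ahead of time
    old = robots[robot_id]
    occupied.remove(old)
    occupied.add(target)
    robots[robot_id] = target
    del reserved[target]                      # release the claim once the move completes
    return True

# Each robot tries to move to a random neighboring cell each round.
for _ in range(100):
    for rid, (x, y) in list(robots.items()):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx = max(0, min(GRID - 1, x + dx))
        ny = max(0, min(GRID - 1, y + dy))
        try_step(rid, (nx, ny))

print(robots)
```

Because a cell can be claimed by only one robot at a time, no two robots ever move into the same cell, which is the essence of the collision-free property the algorithm guarantees.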

The robots coordinate with one another to form a shape, and it is their near-sightedness that makes this possible at scale.

“Each robot can only sense three or four of its closest neighbors,” Rubenstein explained. “They can’t see across the whole swarm, which makes it easier to scale the system. The robots interact locally to make decisions without global information.”

One hundred robots can coordinate to form a shape within a minute, compared with the hour it took in some previous approaches. Rubenstein hopes his algorithm will be used in both driverless vehicles and automated warehouses.

“Large companies have warehouses with hundreds of robots doing tasks similar to what our robots do in the lab,” he said. “They need to make sure their robots don’t collide but do move as quickly as possible to reach the spot where they eventually give an object to a human.”

 
