Scientists Have Developed Psychosensory Electronic Skin Technology
A group of scientists led by Professor Jae Eun Jang in the Department of Information and Communication Engineering at DGIST (Daegu Gyeongbuk Institute of Science and Technology) has developed a new type of psychosensory electronic skin technology. It can detect “prick” and “hot” pain sensations the way humans do. The research could be used in the development of humanoid robots, and it could greatly improve the field of prosthetics, especially for patients using prosthetic hands.

These developments follow a long-running effort to recreate the five human senses on technological platforms, an effort that helped lead to devices such as cameras and televisions that have had a huge impact on society. Scientists are still working to imitate the tactile, olfactory, and gustatory senses, and tactile sensing is likely the next to be mimicked. Most current tactile research focuses on physical mimetics, such as measuring the pressure a robot needs to grab an object. Psychosensory tactile research, the kind conducted by the team, instead focuses on mimicking human tactile feelings such as softness, smoothness, or roughness. This line of research is still in its infancy, but big developments are being made.

The tactile sensor developed by Professor Jae Eun Jang and the team can detect pain and temperature much as a human does. The research was conducted together with Professor Cheil Moon’s team in the Department of Brain and Cognitive Science, Professor Ji-woong Choi’s team in the Department of Information and Communication Engineering, and Professor Hongsoo Choi’s team in the Department of Robotics Engineering.

Together, the teams developed a simplified sensor structure that measures pressure and temperature at the same time, and it can be applied to a variety of tactile systems regardless of the sensor’s measurement principle.

The researchers focused heavily on zinc oxide nanowire (ZnO nanowire) technology, which they applied as a self-powered tactile sensor. Because it exploits the piezoelectric effect, which generates electrical signals in response to pressure, it does not require a battery to work.

A temperature sensor was applied at the same time so that one device could perform two different tasks. The team arranged electrodes on a flexible polyimide substrate and then grew the ZnO nanowires, which allowed them to measure the piezoelectric response to pressure and the temperature change simultaneously. They also developed a signal-processing technique that uses the pressure level, the stimulated area, and the temperature to judge whether a pain signal should be generated.
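To make that signal-processing step concrete, here is a minimal sketch of such a decision rule. It is not the DGIST team's implementation; the thresholds, field names, and the simple prick/hot logic are illustrative assumptions only.

```python
# Minimal sketch (not the DGIST implementation): a toy rule that combines a
# piezoelectric pressure reading, the stimulated area, and a temperature reading
# to decide whether a "pain" signal should fire. All values are assumptions.
from dataclasses import dataclass

@dataclass
class TactileSample:
    pressure_kpa: float      # inferred from the piezoelectric voltage pulse
    contact_area_mm2: float  # how much of the sensor surface is stimulated
    temperature_c: float     # from the co-located temperature sensor

def pain_signal(sample: TactileSample,
                pressure_limit: float = 150.0,
                temp_limit: float = 45.0) -> bool:
    """Return True if the stimulus should be classified as painful."""
    # A sharp "prick": high pressure concentrated on a small area.
    prick = sample.pressure_kpa > pressure_limit and sample.contact_area_mm2 < 5.0
    # A "hot" stimulus: temperature above a damage threshold, regardless of pressure.
    hot = sample.temperature_c > temp_limit
    return prick or hot

print(pain_signal(TactileSample(200.0, 2.0, 25.0)))   # prick  -> True
print(pain_signal(TactileSample(30.0, 40.0, 55.0)))   # hot    -> True
print(pain_signal(TactileSample(30.0, 40.0, 25.0)))   # gentle -> False
```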

Professor Jang in the Department of Information and Communication Engineering spoke about the new technology. 

“We have developed a core base technology that can effectively detect pain, which is necessary for developing future-type tactile sensor. As an achievement of convergence research by experts in nano engineering, electronic engineering, robotics engineering, and brain sciences, it will be widely applied on electronic skin that feels various senses as well as new human-machine interactions. If robots can also feel pain, our research will expand further into technology to control robots’ aggressive tendency, which is one of the risk factors of AI development.” 

While this technology is still in the early stages of development, it could be used in future AI and humanoid development. It holds the possibility of greatly improving current prosthetics and electronic skin, technologies that are coming ever closer to matching the human body parts they model.

 

Industrial Robotics Company ABB Joins Up With AI Startup Covariant

The AI startup Covariant and the industrial robotics company ABB will be partnering to engineer sophisticated robots that can pick up and manipulate a wide variety of objects. These robots will be used in warehouses and other industrial settings.

As Fortune reported, ABB primarily builds robots for car manufacturers, but the company wants to branch out into other sectors. ABB is aiming to become involved in logistics, where its robots would be used in large warehouses, such as those run by Amazon, to manipulate items, package goods, and prepare shipments.

According to Fortune, ABB president Sami Atiya said the company sought partners experienced in creating sophisticated computer vision applications. While ABB currently uses computer vision algorithms to operate some of its robots, it aimed to push the envelope and create reliable, high-dexterity robots capable of maneuvering and manipulating thousands of different objects.

The company examined many candidates before settling on Covariant as its partner. Covariant is a robotics research company whose researchers come from places like OpenAI and the University of California, Berkeley. Covariant produced the only software examined by ABB that could reliably recognize many different items without the intervention of human operators.

The computer vision and robotics applications developed by Covariant were trained with reinforcement learning. Thanks to deep neural networks and reinforcement learning, Covariant was able to create software that learns through experience and can reliably and consistently recognize objects once a pattern has been learned. The CEO of Covariant, Peter Chen, was interviewed by Fortune. Chen explained that as more robotics companies like ABB branch out into new industries and markets, the goal becomes creating robots capable of a wider variety of tasks than those currently used in many manufacturing and logistics operations. Most robots employed in industrial settings can only do a handful of very specific things; the goal, Chen said, is to create robots capable of adaptation.
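As a rough flavor of what "learning through experience" means, the toy sketch below uses an epsilon-greedy bandit to learn which of several grasp strategies succeeds most often. It is not Covariant's software; the strategy names and success rates are invented for illustration.

```python
# Illustrative only: a tiny epsilon-greedy learner that discovers, by trial and
# error, which grasp strategy works best. Strategies and success rates are made up.
import random

STRATEGIES = ["top_grasp", "side_grasp", "suction"]
TRUE_SUCCESS = {"top_grasp": 0.4, "side_grasp": 0.7, "suction": 0.9}  # hidden from the learner

def train(episodes: int = 5000, epsilon: float = 0.1) -> dict:
    value = {s: 0.0 for s in STRATEGIES}   # estimated success rate per strategy
    counts = {s: 0 for s in STRATEGIES}
    for _ in range(episodes):
        # Explore occasionally, otherwise pick the strategy currently believed best.
        s = random.choice(STRATEGIES) if random.random() < epsilon else max(value, key=value.get)
        reward = 1.0 if random.random() < TRUE_SUCCESS[s] else 0.0  # simulated grasp outcome
        counts[s] += 1
        value[s] += (reward - value[s]) / counts[s]  # incremental mean update
    return value

print(train())  # the estimate for "suction" should end up close to 0.9
```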

As a result of the partnership with Covariant, ABB will get insight into the technology that drives Covariant’s AI, and this knowledge could help it better integrate AI into the technology that powers its existing robots. Currently, Covariant is a fairly small operation with only a handful of robots in full-time operation, spread across industries such as electronics, pharmaceuticals, and apparel. However, the collaboration with ABB could bring substantial growth.

The partnership between Covariant and ABB highlights the increasing role of AI startups in the robotics field. Other examples of AI startups collaborating with robotics companies include the Japanese corporation IHI’s partnership with the AI startup Osaro, a collaboration that likewise concerns the use of robots to grasp and manipulate objects.

While there is currently a lot of focus on robots automating away human jobs, in some industries there simply aren’t enough humans to do those jobs to begin with. A recent report on the logistics sector estimates that over half of all logistics companies will face staff shortages over the next five years, with warehouse workers in particularly short supply. The report attributes the labor shortage within the logistics industry to falling unemployment rates, long hours, tedious work, and low wages.

Researchers Bring Sense of Touch to Robotic Finger

Researchers at Columbia Engineering have brought a sense of touch to a newly developed robotic finger. It is able to localize touch with extremely high precision over large, multicurved surfaces. The new development puts robotics one step closer to reaching human-like status. 

Matei Ciocarlie is an associate professor in the departments of mechanical engineering and computer science. Ciocarlie led the research in collaboration with Electrical Engineering Professor Ioannis (John) Kymissis. 

“There has long been a gap between stand-alone tactile sensors and fully integrated tactile fingers — tactile sensing is still far from ubiquitous in robotic manipulation,” says Ciocarlie. “In this paper, we have demonstrated a multicurved robotic finger with accurate touch localization and normal force detection over complex 3D surfaces.”

Current methods for integrating touch sensors into robot fingers face many challenges: it is difficult to cover multicurved surfaces, wire counts are high, and the sensors are hard to fit into small fingertips, which prevents their use in dexterous hands. The Columbia Engineering team got around these challenges with a new approach: overlapping signals from light emitters and receivers embedded in a transparent waveguide layer that covers the functional areas of the finger.

The team was able to obtain a signal data set that changes in response to deformation of the finger due to touch. They did this by measuring light transport between every emitter and receiver. Useful information, such as contact location and applied normal force, was then extracted from the data through the use of data-driven deep learning methods. The team was able to do this without the use of analytical models. 

Through this method, the research team developed a fully integrated, sensorized robot finger that has a low wire count. It was built through the use of accessible manufacturing methods and can be easily integrated into dexterous hands. 

The study was published online in IEEE/ASME Transactions on Mechatronics.

The first part of the project was the use of light to sense touch. There is a layer of transparent silicone underneath the “skin” of the finger, and the team shined light into it from more than 30 LEDs. The finger also has over 30 photodiodes that are responsible for measuring how the light bounces around. As soon as the finger comes into contact with something, the skin deforms and light moves around in the transparent layer underneath the skin. The researchers then measure how much light goes from every LED to every diode in order to come up with about 1,000 signals. Each one of those signals contains information about the contact made.
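A minimal sketch of how a signal vector of that size arises, assuming roughly 32 emitters and 32 photodiodes; the read_photodiode function is a hypothetical stand-in for the real acquisition hardware, not the Columbia team's code.

```python
# Sketch only: with ~32 LEDs and ~32 photodiodes, every emitter-receiver pair
# contributes one light-transport measurement, giving on the order of 1,000 signals.
import numpy as np

N_LEDS, N_PHOTODIODES = 32, 32  # roughly "more than 30" of each, per the article

def read_photodiode(led: int, diode: int) -> float:
    """Hypothetical placeholder: light measured at `diode` while `led` is lit."""
    rng = np.random.default_rng(led * 100 + diode)
    return float(rng.uniform(0.0, 1.0))

def capture_signal_vector() -> np.ndarray:
    """Light one LED at a time and record every photodiode reading."""
    return np.array([read_photodiode(led, diode)
                     for led in range(N_LEDS)
                     for diode in range(N_PHOTODIODES)])

signals = capture_signal_vector()
print(signals.shape)  # (1024,) -- touch deforms the skin and perturbs this whole vector
```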

“The human finger provides incredibly rich contact information — more than 400 tiny touch sensors in every square centimeter of skin!” says Ciocarlie. “That was the model that pushed us to try and get as much data as possible from our finger. It was critical to be sure all contacts on all sides of the finger were covered — we essentially built a tactile robot finger with no blind spots.”

The second part of the project was designing the system so that the data could be processed by machine learning algorithms. The data is extremely complex and cannot be interpreted by humans, but current machine learning techniques can learn to extract specific information, such as where the finger is being touched, what is touching the finger, and how much force is being applied.

“Our results show that a deep neural network can extract this information with very high accuracy,” says Kymissis. “Our device is truly a tactile finger designed from the very beginning to be used in conjunction with AI algorithms.”
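The sketch below shows the general shape of such a model: a small fully connected network mapping the ~1,024-dimensional signal vector to a contact estimate (location plus normal force). It is not the authors' network; the layer sizes, output format, and synthetic training data are assumptions for illustration.

```python
# Hedged sketch, not the published model: a small MLP regressing contact
# location (x, y, z) and normal force from the light-transport signal vector.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 4),          # outputs: contact x, y, z and normal force
)

# Synthetic stand-in data; in practice these would be measured signal vectors
# paired with ground-truth contact locations and forces from a calibration setup.
signals = torch.randn(512, 1024)
targets = torch.randn(512, 4)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(signals), targets)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.4f}")
```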

The team also designed the finger so that it can be used on robotic hands. The finger is able to collect nearly 1,000 signals, but it only requires one 14-wire cable connecting it to the hand. There are also no complex off-board electronics required for it to function. 

The team currently has two dexterous hands that are being integrated with the fingers, and they will look to use the hands to demonstrate dexterous manipulation abilities.

“Dexterous robotic manipulation is needed now in fields such as manufacturing and logistics, and is one of the technologies that, in the longer term, are needed to enable personal robotic assistance in other areas, such as healthcare or service domains,” says Ciocarlie.

 

Swarm Robots Help Self-Driving Cars Avoid Collisions

The top priority for companies developing self-driving vehicles is ensuring that the vehicles can navigate safely without crashing or causing traffic jams. Northwestern University has brought that reality one step closer with the development of the first decentralized algorithm that carries a collision-free, deadlock-free guarantee.

The researchers tested the algorithm in a simulation of 1,024 robots, as well as on a swarm of 100 real robots in the lab. In less than a minute, the robots reliably, safely, and efficiently converged to form a predetermined shape.

Northwestern’s Michael Rubenstein led the study. He is the Lisa Wissner-Slivka and Benjamin Slivka Professor in Computer Science in Northwestern’s McCormick School of Engineering. 

“If you have many autonomous vehicles on the road, you don’t want them to collide with one another or get stuck in a deadlock,” said Rubenstein. “By understanding how to control our swarm robots to form shapes, we can understand how to control fleets of autonomous vehicles as they interact with each other.”

The paper is set to be published in the journal IEEE Transactions on Robotics later this month. 

Using a swarm of small robots, rather than one large robot or a swarm led by a single robot, has an advantage: the lack of centralized control. Centralized control can become a single point of failure, and Rubenstein’s decentralized algorithm acts as a fail-safe.

“If the system is centralized and a robot stops working, then the entire system fails,” Rubenstein said. “In a decentralized system, there is no leader telling all the other robots what to do. Each robot makes its own decisions. If one robot fails in a swarm, the swarm can still accomplish the task.”

In order to avoid collisions and jams, the robots coordinate with each other. The ground beneath the robots acts as a grid for the algorithm, and each robot is aware of its position on the grid due to technology similar to GPS. 

Before moving from one spot to another, each robot uses sensors to communicate with the others and determine whether nearby spaces on the grid are vacant or occupied.

“The robots refuse to move to a spot until that spot is free and until they know that no other robots are moving to that same spot,” Rubenstein said. “They are careful and reserve a space ahead of time.”
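A minimal sketch of that reservation idea, not Northwestern's algorithm: a robot moves into a neighboring grid cell only after confirming it is free and claiming it first. The shared sets below stand in for the local sensing and messaging the real robots use, and all names are assumptions.

```python
# Toy version of "reserve a space ahead of time": never move into a cell that is
# occupied or already claimed by another robot. Data structures are illustrative.
GRID = (10, 10)

class Robot:
    def __init__(self, name, cell):
        self.name, self.cell = name, cell

    def try_move(self, target, occupied: set, reserved: set) -> bool:
        """Move to `target` only if it is on the grid, unoccupied, and unreserved."""
        if target in occupied or target in reserved:
            return False                     # wait: the cell is taken or already claimed
        if not (0 <= target[0] < GRID[0] and 0 <= target[1] < GRID[1]):
            return False
        reserved.add(target)                 # claim the cell before moving
        occupied.discard(self.cell)
        self.cell = target
        occupied.add(target)
        reserved.discard(target)             # release the claim once the move completes
        return True

occupied, reserved = {(0, 0), (0, 1)}, set()
a, b = Robot("a", (0, 0)), Robot("b", (0, 1))
print(a.try_move((0, 1), occupied, reserved))  # False: b still occupies the cell
print(b.try_move((0, 2), occupied, reserved))  # True: b advances
print(a.try_move((0, 1), occupied, reserved))  # True: a can now follow
```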

The robots communicate with each other to form a shape, and the deliberate near-sightedness of the robots makes this possible at scale.

“Each robot can only sense three or four of its closest neighbors,” Rubenstein explained. “They can’t see across the whole swarm, which makes it easier to scale the system. The robots interact locally to make decisions without global information.”

The 100 robots can coordinate to form a shape within a minute, compared with the hour it took some previous approaches. Rubenstein wants his algorithm to be used in both driverless vehicles and automated warehouses.

“Large companies have warehouses with hundreds of robots doing tasks similar to what our robots do in the lab,” he said. “They need to make sure their robots don’t collide but do move as quickly as possible to reach the spot where they eventually give an object to a human.”

 
