Engineers Create a Robot That Can Move Like an Inchworm

Engineering researchers at the University of Toronto have developed a tiny robot that moves like an inchworm. The technology could have an impact on a range of industries, including aviation and smart technology.

The research was published in Scientific Reports. 

The research team, which includes Professor Hani Naguib, focuses on smart materials, especially electrothermal actuators (ETAs). ETAs are devices made of specialized polymers that can be programmed to respond physically to electrical or thermal stimuli. They can be programmed to mimic muscle reflexes, tightening in the cold and relaxing when warm.

Naguib and his team are applying the technology to robotics, developing soft robots that can crawl and curl like an inchworm. The soft robots could also prove important in manufacturing, where they could replace some of the metal-plated robots in use today.

“Right now, the robots you’ll find in industry are heavy, solid and caged off from workers on the factory floor, because they pose safety hazards,” explains Naguib.

“But the manufacturing industry is modernizing to meet demand. More and more, there’s an emphasis on incorporating human-robot interactions,” he says. “Soft, adaptable robots can leverage that collaboration.”

Responsive materials have been studied for a long time, but the team discovered a new way of programming them to produce the inchworm-like robotic movement.

According to PhD student and the paper’s lead author, Yu-Chen (Gary) Sun, “Existing research documents the programming of ETAs from a flat resting state. The shape-programmability of a two-dimensional structure is limited, so the response is just a bending motion.”

The team used a thermally induced stress-relaxation and curing method to create an ETA with a three-dimensional resting state, which opens up an entirely new set of possible shapes and movements.

“What’s also novel is the power required to induce the inchworm motion. Ours is more efficient than anything that has existed in research literature so far,” says Sun.

According to Professor Naguib, this new field of robotics could revolutionize many industries, including security, aviation, surgery, and wearable electronics.

“In situations where humans could be in danger — a gas leak or a fire — we could outfit a crawling robot with a sensor to measure the harmful environment,” explains Naguib. “In aerospace, we could see smart materials being the key to next-generation aircrafts with wings that morph.”

The first applications will likely be within the wearable technology field. 

“We’re working to apply this material to garments. These garments would compress or release based on body temperature, which could be therapeutic to athletes,” says Naguib. The team is also studying whether smart garments could be beneficial for spinal cord injuries.

The researchers will now work on making the responsive crawling motion faster and on exploring new configurations.

“In this case, we’ve trained it to move like a worm,” he says. “But our innovative approach means we could train robots to mimic many movements — like the wings of a butterfly.”

 

Assembler Robots Can Piece Together Large Structures

New work on assembler robots is underway at the Massachusetts Institute of Technology, where Professor Neil Gershenfeld and graduate student Benjamin Jenett of MIT’s Center for Bits and Atoms (CBA) have created prototype versions of the robots. The small robots can assemble small structures and coordinate with one another to combine those structures into larger ones.

The development could have big implications for industries such as commercial aircraft manufacturing. Today, aircraft are typically assembled piece by piece: the sections are built at different locations and eventually brought together at one site for final assembly. With this technology, an entire aircraft could be assembled in one place by the small robots.

The work was published in the October issue of IEEE Robotics and Automation Letters. The paper was written by Gershenfeld, Jenett, and graduate student Amira Abdel-Rahman, along with CBA alumnus Kenneth Cheung, who now works at NASA’s Ames Research Center. There, Cheung runs the ARMADAS project, which focuses on designing a lunar base that can be built through robotic assembly.

“What’s at the heart of this is a new kind of robotics, that we call relative robots,” Gershenfeld says.

Two Categories of Robots

According to Gershenfeld, there are two broad categories of robots. The first is built from expensive, custom components optimized for particular applications such as factory assembly. The second is inexpensive, mass-produced robots with lower performance.

The new assembler robots fall into neither category: they are simpler than the first and more capable than the second, and they could change how items such as airplanes, bridges, and buildings are produced.

What sets the assembler robots apart is the relationship between the robotic device and the materials it is handling.

“You can’t separate the robot from the structure — they work together as a system,” Gershenfeld says. 

Instead of keeping track of their position with navigation systems, the assembler robots navigate relative to the small subunits, or voxels, they are building with. With each step onto the next voxel, a robot readjusts its sense of position.

The team wants any physical object to be capable of being recreated as an array of small voxels consisting of simple struts and nodes. Depending on their arrangement, these simple components can distribute loads in different ways, and the overall object is lighter because the voxels consist mostly of empty space. Each voxel has a latching system built in so that the units lock together.

Simplifying Complex Robotic Systems

As the robot assembles voxels, it counts its steps over the structure. This, along with the relative navigation technique, greatly simplifies today’s complex robotic systems.

“It’s missing most of the usual control systems, but as long as it doesn’t miss a step, it knows where it is,” Gershenfeld says. 
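To make the step-counting idea concrete, here is a minimal Python sketch. It is not the CBA team’s actual software; the class names, the integer voxel lattice, and the one-voxel step size are illustrative assumptions, but the logic mirrors the principle Gershenfeld describes: as long as the robot never misses a step, its step count is its position.

```python
# Minimal sketch (not the CBA team's code) of a "relative robot":
# instead of a global navigation system, the robot counts discrete steps
# over a lattice of identical voxels, so its integer step count *is* its
# position. All names and units here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Voxel:
    """One lattice subunit: struts and nodes with a latch to its neighbours."""
    index: tuple            # position in the lattice, in voxel units
    latched: bool = True    # locked onto the rest of the structure


class AssemblerRobot:
    def __init__(self):
        self.position = (0, 0, 0)   # current voxel index, not metres
        self.steps_taken = 0

    def step(self, direction):
        """Move exactly one voxel; position is updated by counting, not sensing."""
        self.position = tuple(p + d for p, d in zip(self.position, direction))
        self.steps_taken += 1

    def place_voxel(self, structure, direction):
        """Latch a new voxel onto the lattice next to the robot's current voxel."""
        target = tuple(p + d for p, d in zip(self.position, direction))
        structure[target] = Voxel(index=target)


# Build a short row of voxels: as long as no step is missed, the robot's
# position always agrees with the structure it has built.
structure = {}
bot = AssemblerRobot()
for _ in range(3):
    bot.place_voxel(structure, (1, 0, 0))
    bot.step((1, 0, 0))

print(bot.position)        # (3, 0, 0)
print(sorted(structure))   # [(1, 0, 0), (2, 0, 0), (3, 0, 0)]
```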

Abdel-Rahman developed control software that speeds up the process by coordinating swarms of the units, helping the robots work together.

Big Interest By Big Names

There is already a lot of interest in the technology by big names such as NASA and the European aerospace company Airbus SE. 

One advantage of the assembler robots is that repairs and maintenance of a structure can follow the same robotic process as the initial assembly. Damaged parts can be replaced or fixed in place, so the structure does not have to be taken apart.

According to Gershenfeld, “Unbuilding is as important as building.” 

“For a space station or a lunar habitat, these robots would live on the structure, continuously maintaining and repairing it,” says Jenett.

These developments could have huge implications for the construction of almost any structure, including entire buildings. According to the team, the approach can even be used in difficult environments like space, the Moon, or Mars. Instead of sending huge structures into space, large numbers of small pieces could be sent up and assembled by the robots, and the subunits could even be made from natural resources found wherever they are deployed.

Enormous Potential and Problems

While this technology has enormous potential to change our society, it is also worth noting its implications for the economy. As robotics and artificial intelligence take on more of the work of building, creating, and developing, the need for humans to do that work diminishes. If we do not proceed with caution, these new technologies will come with enormous problems.

 

Team Develops First Ever Autonomous Humanoid Robot With Full-Body Artificial Skin 

A team from the Technical University of Munich (TUM) has developed the first autonomous humanoid robot with full-body artificial skin. The researchers created a system that pairs artificial skin with control algorithms, helping robots sense their own bodies and environment, a capability that will be important as robots inevitably become commonplace among humans.

A robot that can better sense its environment is much safer around humans, in part because it can avoid unwanted contact and accidents.

The team behind the new technology includes Prof. Gordon Cheng. The skin is made up of hexagonal cells about one inch in diameter, each containing a microprocessor and sensors that detect contact, acceleration, proximity, and temperature.

The skin cells themselves are not new; Cheng, Professor of Cognitive Systems at TUM, developed them about ten years ago. The new work by the TUM team unlocks their full potential.

The research was published in the journal Proceedings of the IEEE. 

The Problem of Computing Capacity

One of the major problems in developing artificial skin is computing capacity. Human skin has about 5 million receptors, which makes it difficult to recreate in robots: constantly processing data from that many sensors can overload a system.

The TUM team decided not to monitor the skin constantly. Instead, they took an event-based approach that reduces the processing effort by as much as 90%: in the new artificial skin, individual cells transmit information only when their measured values change. In other words, a sensor must detect a change in sensation before any processing is triggered.
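As a rough illustration of that event-based approach, here is a small Python sketch. It is not TUM’s actual firmware; the threshold value and class names are assumptions made for illustration, but it shows how transmitting only on change keeps a nearly constant signal from generating processing load.

```python
# Illustrative sketch (not TUM's firmware) of event-based skin sensing:
# a cell transmits a reading only when it changes noticeably, instead of
# streaming every sample. Threshold and names are illustrative assumptions.

class SkinCell:
    def __init__(self, cell_id, threshold=0.05):
        self.cell_id = cell_id
        self.threshold = threshold    # minimum change worth reporting
        self.last_reported = None     # last value actually transmitted

    def sample(self, value):
        """Return an event only if the reading moved past the threshold."""
        if self.last_reported is None or abs(value - self.last_reported) > self.threshold:
            self.last_reported = value
            return {"cell": self.cell_id, "value": value}   # transmit
        return None                                         # stay silent


# A mostly constant pressure signal produces only a few events, which is
# where the reported ~90% reduction in processing load comes from.
cell = SkinCell(cell_id=42)
readings = [0.50, 0.51, 0.50, 0.90, 0.91, 0.52, 0.52, 0.52]
events = [e for r in readings if (e := cell.sample(r)) is not None]
print(f"{len(events)} events from {len(readings)} samples")   # 3 events from 8 samples
```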

Critical For Human-Robot Interaction

The technique developed by Prof. Cheng and his team increases the safety of the machines. The researchers are the first to apply artificial skin to a human-size autonomous robot that does not depend on external computation.

The robot fitted with the artificial skin, called H-1, has 1,260 cells and more than 13,000 sensors located on its upper body, arms, legs, and the soles of its feet. As a result, the robot can sense its entire body, from top to bottom, and it can move along uneven surfaces and balance on one leg.

H-1 is even capable of safely hugging a human, which is a significant accomplishment: machines this powerful can be extremely dangerous and injure people during close interaction. Because H-1 can sense multiple parts of its body at once, it avoids exerting too much force or pressure.

“This might not be as important in industrial applications, but in areas such as nursing care, robots must be designed for very close contact with people,” Gordon Cheng explained.

The new technology is very versatile, and it can still function even if some of the cells are lost. 

“Our system is designed to work trouble-free and quickly with all kinds of robots,” says Gordon Cheng. “Now we’re working to create smaller skin cells with the potential to be produced in larger numbers.”

There are constant developments in the AI field that are bringing humans and robots closer together, and new technology like this is critical in facilitating a safe environment where both can operate. 

 

New Research Sheds Light on Human-Robot Trust

New research led by the U.S. Army Research Laboratory, along with the University of Central Florida Institute for Simulation and Training, is shedding light on the level of trust humans place in robots. The project focused on the relationship between humans and robots, and on whether humans give more weight to a robot’s reasoning or to its mistakes.

The research looked into human-agent teaming, or HAT, and how human trust, workload, and perceptions of an agent are influenced by the transparency of agents such as robots, unmanned vehicles, and software agents. Agent transparency refers to a human’s ability to identify an agent’s intent, reasoning process, and future plans.

The research suggests that human confidence in a robot decreases whenever the robot makes a mistake, regardless of whether the robot has been transparent about its reasoning process.

The research was published in the August edition of IEEE Transactions on Human-Machine Systems. The paper is titled “Agent Transparency and Reliability in Human-Robot Interaction: The Influence on User Confidence and Perceived Reliability.”

Traditional research dealing with human-agent teaming uses completely reliable intelligent agents that make no mistakes. However, this new study was one of the few that explored how agent transparency interacts with agent reliability. The study involved a robot that made mistakes while humans were watching, and the humans were then asked if they viewed the robot as less reliable. During the entire process, the humans were given insight into the robot’s reasoning process. 

Dr. Julia Wright is the principal investigator for the project, and she is a researcher at U.S. Army Combat Capabilities Development Command’s Army Research Laboratory, or ARL. 

“Understanding how the robot’s behavior influences their human teammates is crucial to the development of effective human-robot teams, as well as the design of interfaces and communication methods between team members,” she said. “This research contributes to the Army’s Multi-Domain Operations efforts to ensure overmatch in artificial intelligence-enabled capabilities. But it is also interdisciplinary, as its findings will inform the work of psychologists, roboticists, engineers, and system designers who are working toward facilitating better understanding between humans and autonomous agents in the effort to make autonomous teammates rather than simply tools.”

The project was part of a larger effort known as the Autonomous Squad Member (ASM) project, sponsored by the Office of the Secretary of Defense’s Autonomy Research Pilot Initiative. The ASM is a small ground robot that operates within an infantry squad and can communicate and interact with the squad.

The study involved participants observing human-agent soldier teams in a simulated environment. The ASM was part of a team that moved through a training course, and the observers’ task was to monitor the team and evaluate the robot. Throughout the course, the team was presented with a variety of events and obstacles. The soldiers navigated each one correctly, but at times the robot misunderstood an obstacle and made mistakes. The robot sometimes shared the reasoning behind its actions as well as the expected outcome.

The study found that participants were more concerned with the robot’s mistakes than with the underlying logic and reasoning behind them. The robot’s reliability played a major role in the participants’ trust and perceptions: whenever the robot made a mistake, observers rated its reliability lower.

Reliability and trust increased when agent transparency increased, that is, when the robot shared the details and reasoning behind its decisions. However, they remained lower than for robots that never made an error. This suggests that sharing reasoning and underlying logic can ease, though not eliminate, some of the trust and reliability issues surrounding robots.

“Earlier studies suggest that context matters in determining the usefulness of transparency information,” Wright said. “We need to better understand which tasks require more in-depth understanding of the agent’s reasoning, and how to discern what that depth would entail. Future research should explore ways to deliver transparency information based on the tasking requirements.”

This research will play a critical role as interaction between humans and robots increases, and the military is one of the areas where it matters most. As these exercises show, robots and soldiers will eventually operate side by side, and just as a soldier must trust another soldier, the same will apply to robots. If that trust can be achieved and robots become commonplace in infantry squads, it will be another instance of artificial intelligence penetrating the defense industry.

 
