Robotics

New Paper Argues Robots Need to Understand Human Motive


A new article by the National Centre for Nuclear Robotics, based at the University of Birmingham, argues that robots need to understand human motive, just as people do. If humans and robots are to work together both effectively and safely, robots cannot simply perform tasks without knowing why they are doing them.

The lead author of the piece is Dr. Valerio Ortenzi of the University of Birmingham. He says this understanding is needed as the economy becomes increasingly automated, connected, and digitized, and as interactions between humans and robots increase dramatically in both factories and homes.

The paper was published in Nature Machine Intelligence. It focuses partly on how robots use objects, particularly on ‘grasping’, an action that comes easily in nature but remains challenging for robots.

Current factory-based robots blindly pick up familiar objects placed in predetermined spots at predetermined times. For a machine to pick up an unfamiliar object in a random place, multiple complex technologies must work together: vision systems and advanced AI let the machine see the target and determine its properties, and some systems also need sensors in the gripper to keep the robot from crushing the object.

Researchers from the National Centre for Nuclear Robotics say that even with all of these technologies, the machine still does not know why it is picking an object up. Because of this, actions we once counted as successes for a robot can actually be real-world failures.

The Nature Machine Intelligence paper uses the example of a robot picking up an object to deliver it to a customer. The robot grasps the object successfully without crushing it, but in doing so covers an important barcode. The object can then no longer be tracked, and there is no information confirming its successful delivery. The delivery system fails because the robot does not know the consequences of grasping the object the wrong way.

Dr. Ortenzi and the co-authors of the paper spoke about other examples as well. 

“Imagine asking a robot to pass you a screwdriver in a workshop. Based on current conventions, the best way for a robot to pick up the tool is by the handle. Unfortunately, that could mean that a hugely powerful machine then thrusts a potentially lethal blade towards you, at speed. Instead, the robot needs to know what the end goal is, i.e., to pass the screwdriver safely to its human colleague, in order to rethink its actions.”

“Another scenario envisages a robot passing a glass of water to a resident in a care home. It must ensure that it doesn’t drop the glass but also that water doesn’t spill over the recipient during the act of passing, or that the glass is presented in such a way that the person can take hold of it.” 

“What is obvious to humans has to be programmed into a machine and this requires a profoundly different approach. The traditional metrics used by researchers, over the past twenty years, to assess robotic manipulation, are not sufficient. In the most practical sense, robots need a new philosophy to get a grip.” 
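The screwdriver example can be caricatured in a few lines of code. This is a toy sketch, not anything from the paper: the grasp candidates, their properties, and the selection rule are all invented for illustration. The point is only that grasp selection must consult the task, not just grasp stability.

```python
# Toy illustration of task-aware grasp selection. All names and numbers
# are invented; real grasp planners work on sensor data, not dictionaries.
GRASPS = {
    "by_handle": {"stability": 0.9, "blade_toward_human": True},
    "by_shaft":  {"stability": 0.6, "blade_toward_human": False},
}

def best_grasp(task):
    """Pick the most stable grasp that satisfies the task's constraints."""
    candidates = []
    for name, props in GRASPS.items():
        if task == "handover" and props["blade_toward_human"]:
            continue  # unsafe: would thrust the blade at the human
        candidates.append((props["stability"], name))
    return max(candidates)[1]

print(best_grasp("bin_pick"))  # by_handle: stability alone decides
print(best_grasp("handover"))  # by_shaft: the safety constraint overrides stability
```

A stability-only planner always answers "by_handle"; conditioning on the task is what changes the answer for a handover.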

Professor Rustam Stolkin, director of the National Centre for Nuclear Robotics, spoke about the organization’s role in developing this technology.

“National Centre for Nuclear Robotics is unique in working on practical problems with industry, while simultaneously generating the highest calibre of cutting-edge academic research — exemplified by this landmark paper.” 

The research was conducted in collaboration with the Centre of Excellence for Robotic Vision at Queensland University of Technology, Australia; Scuola Superiore Sant’Anna, Italy; the German Aerospace Center (DLR), Germany; and the University of Pisa, Italy.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Assembler Robots Can Piece Together Large Structures


New work is being done with assembler robots at the Massachusetts Institute of Technology. Professor Neil Gershenfeld and graduate student Benjamin Jenett of MIT’s Center for Bits and Atoms (CBA) have created prototype versions of the robots. The small robots can assemble small structures and coordinate with one another to join those smaller structures into larger pieces.

These developments could have big implications for industries such as commercial aircraft manufacturing. Today, commercial aircraft are often assembled piece by piece: the pieces are built at different locations and eventually brought to one site for final assembly. With this new technology, an entire aircraft could be assembled in one place by the small robots.

The new developments were published in the October issue of IEEE Robotics and Automation Letters. The paper was written by Gershenfeld, Jenett, and graduate student Amira Abdel-Rahman, together with CBA alumnus Kenneth Cheung, who now works at NASA’s Ames Research Center. There, Cheung runs the ARMADAS project, which focuses on designing a lunar base that could be built through robotic assembly.

“What’s at the heart of this is a new kind of robotics, that we call relative robots,” Gershenfeld says.

Two Categories of Robots

According to Gershenfeld, robots fall into two broad categories. The first are made of expensive, custom components specifically optimized for applications like factory assembly. The second are inexpensive, mass-produced robots with lower performance.

The new assembler robots are neither. They are simpler yet more capable, and they could change how items such as airplanes, bridges, and buildings are produced.

The big difference with these assembler robots is that they have a different system for how the robotic device interacts with the materials that it is handling. 

“You can’t separate the robot from the structure — they work together as a system,” Gershenfeld says. 

Instead of tracking their position with navigation systems, the assembler robots navigate relative to the structure’s small subunits, or voxels. With each step onto the next voxel, a robot re-anchors its sense of position.

The team wants any physical object to be recreatable as an array of smaller voxels consisting of simple struts and nodes. These simple components can distribute loads differently depending on their arrangement, and the overall object is lighter because the voxels consist mostly of empty space. Each voxel has a built-in latching system so the pieces stay connected.

Simplifying Complex Robotic Systems

As the robot assembles voxels into a structure, it counts its steps across it. This, combined with the relative navigation technique, greatly simplifies what would otherwise be a complex robotic system.

“It’s missing most of the usual control systems, but as long as it doesn’t miss a step, it knows where it is,” Gershenfeld says. 
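The step-counting idea can be sketched in a few lines. This is an illustrative toy, not MIT’s code: the robot keeps no global coordinates, only a tally of discrete steps over the voxel lattice, re-anchored at each voxel.

```python
# Toy sketch of "relative" positioning: the robot's only state is its
# location in lattice steps, updated as it walks voxel to voxel.
MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

class RelativeRobot:
    def __init__(self):
        self.position = (0, 0)  # lattice coordinates relative to the start voxel

    def step(self, direction):
        """Advance exactly one voxel; missing a step would corrupt the count."""
        dx, dy = MOVES[direction]
        x, y = self.position
        self.position = (x + dx, y + dy)
        return self.position

robot = RelativeRobot()
for move in ["east", "east", "north"]:
    robot.step(move)
print(robot.position)  # (2, 1): two voxels east, one north of the start
```

Note there is no GPS, camera, or map in this model, which is the sense in which the quote says the robot "knows where it is" as long as it never misses a step.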

Abdel-Rahman developed control software that helps make the process faster by bringing together swarms of units, which help the robots coordinate and work together. 

Big Interest By Big Names

There is already a lot of interest in the technology from big names such as NASA and the European aerospace company Airbus SE.

One advantage of the assembler robots is that repairs and maintenance of a structure can follow the same robotic process as the initial assembly. Damaged parts can be replaced or repaired in place, so the structure does not have to be taken apart or moved.

According to Gershenfeld, “Unbuilding is as important as building.” 

“For a space station or a lunar habitat, these robots would live on the structure, continuously maintaining and repairing it,” says Jenett.

These developments could have huge implications for the construction of almost any structure, including entire buildings. According to the team, the approach could even be used in difficult environments like space, the Moon, or Mars. Instead of sending huge structures into space, large numbers of small pieces could be sent and assembled on site by the robots. Better still, local resources could be used wherever the subunits are deployed.

Enormous Potential and Problems

While this technology has enormous potential to change our society, it will also have major implications for the economy. With the spread of robotics and artificial intelligence, the need for humans to build, create, and develop is shrinking. If we do not proceed with caution, these new technologies could bring enormous problems alongside their benefits.

 


Team Develops First Ever Autonomous Humanoid Robot With Full-Body Artificial Skin 


A team from the Technical University of Munich (TUM) has developed the first autonomous humanoid robot with full-body artificial skin, pairing artificial skin with control algorithms. The technology helps robots sense their own bodies and environment, which will be important as they inevitably become commonplace among humans.

If a robot is able to better navigate its environment through the use of sensing, it will be much safer around humans. One of the things they will be able to do is avoid unwanted contact and accidents.

The team responsible for the new technology was led by Prof. Gordon Cheng. The skin is made up of hexagonal cells about one inch in diameter, each containing a microprocessor and sensors that detect contact, acceleration, proximity, and temperature.

The skin cells themselves are not new; Cheng, a Professor of Cognitive Systems at TUM, developed them 10 years ago. The team’s latest work unlocks their full potential.

The research was published in the journal Proceedings of the IEEE. 

The Problem of Computing Capacity

One of the major problems in developing artificial skin is computing capacity. Human skin has about 5 million receptors, and recreating that in robots is a challenge: continuously processing data from so many sensors can overload a system.

The team at TUM decided not to monitor the skin constantly. Instead, they used an event-based approach that reduces the processing effort by as much as 90%. In the newly developed artificial skin, individual cells transmit information only when a measured value changes; the system relies on the sensors to detect a change, which then triggers processing.
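The event-based idea can be sketched in miniature. This is a hypothetical illustration, not TUM’s implementation: the cell ID, sensor names, and change threshold are all invented. The key behavior is that a cell stays silent unless a sensor value has changed enough to matter.

```python
# Hypothetical sketch of event-based skin-cell reporting: a cell transmits
# a reading only when it differs enough from the last reported value,
# instead of streaming every sample.
class SkinCell:
    def __init__(self, cell_id, threshold=0.05):
        self.cell_id = cell_id
        self.threshold = threshold
        self.last_reported = {}  # sensor name -> last value actually sent

    def sample(self, readings):
        """Return only the readings that changed enough to report."""
        events = {}
        for sensor, value in readings.items():
            last = self.last_reported.get(sensor)
            if last is None or abs(value - last) > self.threshold:
                events[sensor] = value
                self.last_reported[sensor] = value
        return events

cell = SkinCell("hex-001")
print(cell.sample({"temperature": 22.0, "proximity": 0.9}))   # first sample: both reported
print(cell.sample({"temperature": 22.01, "proximity": 0.9}))  # nothing changed enough: {}
```

With 1,260 cells and over 13,000 sensors, filtering out unchanged readings at the cell is what keeps the central computation tractable.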

Critical For Human-Robot Interaction

This new technique by Prof. Cheng and his team helps increase the safety of the machines. They are now the first to apply artificial skin to a human-size autonomous robot that is not dependent on external computation. 

The robot that they used for the artificial skin is called the H-1 robot, and it has 1,260 cells and more than 13,000 sensors. The sensors and cells are located on the upper body, arms, legs, and the soles of the feet. Because of this, the robot can sense its entire body, from top to bottom. The H-1 can move along uneven surfaces and balance on one leg. 

The H-1 can even hug a human safely, which is a real accomplishment: machines this powerful can seriously injure people during close interaction. The H-1 senses multiple parts of its body at once so that it does not exert too much force or pressure.

“This might not be as important in industrial applications, but in areas such as nursing care, robots must be designed for very close contact with people,” Gordon Cheng explained.

The new technology is very versatile, and it can still function even if some of the cells are lost. 

“Our system is designed to work trouble-free and quickly with all kinds of robots,” says Gordon Cheng. “Now we’re working to create smaller skin cells with the potential to be produced in larger numbers.”

There are constant developments in the AI field that are bringing humans and robots closer together, and new technology like this is critical in facilitating a safe environment where both can operate. 

 


Engineers Create a Robot That Can Move Like an Inchworm


Engineering researchers from the University of Toronto have developed a tiny robot that moves like an inchworm. The newly developed technology could impact various industries, including aviation and smart technology.

The research was published in Scientific Reports. 

The research group includes Professor Hani Naguib. The team focuses on smart materials, especially electrothermal actuators (ETAs): devices made of polymers that can be programmed to respond physically to electrical or thermal changes. They can be programmed to mimic muscle reflexes, tightening in the cold and relaxing when warm.

Professor Naguib and his team are applying this technology to robotics, developing soft robots that can crawl and curl like an inchworm. The robots could also be important in manufacturing, where they could replace some of the metal-plated bots in use today.

“Right now, the robots you’ll find in industry are heavy, solid and caged off from workers on the factory floor, because they pose safety hazards,” explains Naguib.

“But the manufacturing industry is modernizing to meet demand. More and more, there’s an emphasis on incorporating human-robot interactions,” he says. “Soft, adaptable robots can leverage that collaboration.”

Responsive materials have been studied for a long time, but the group of engineers discovered a new way of programming them to produce the inchworm-like robotic movements.

According to PhD student and the paper’s lead author, Yu-Chen (Gary) Sun, “Existing research documents the programming of ETAs from a flat resting state. The shape-programmability of a two-dimensional structure is limited, so the response is just a bending motion.”

The team used a thermally induced stress-relaxation and curing method to create an ETA with a three-dimensional resting state, opening up an entirely new set of possible shapes and movements.

“What’s also novel is the power required to induce the inchworm motion. Ours is more efficient than anything that has existed in research literature so far,” says Sun.

According to Professor Naguib, this new field of robotics can completely revolutionize many industries including security, aviation, surgery, and wearable electronics. 

“In situations where humans could be in danger — a gas leak or a fire — we could outfit a crawling robot with a sensor to measure the harmful environment,” explains Naguib. “In aerospace, we could see smart materials being the key to next-generation aircraft with wings that morph.”

The first applications will likely be within the wearable technology field. 

“We’re working to apply this material to garments. These garments would compress or release based on body temperature, which could be therapeutic to athletes,” says Naguib. The team is also studying whether smart garments could be beneficial for spinal cord injuries.

The team of researchers will now look towards making the responsive crawling motion faster, and they will focus on new configurations. 

“In this case, we’ve trained it to move like a worm,” he says. “But our innovative approach means we could train robots to mimic many movements — like the wings of a butterfly.”

 
