
Robotics

Researchers Develop Swarm of Tiny Drones to Explore Unknown Environments


Researchers from Delft University of Technology have developed a swarm of tiny drones capable of autonomously exploring unknown environments. The research was published on October 23 in Science Robotics, and it marks an important step in the ongoing development of swarm robotics.

One of the most challenging parts of developing these tiny robots was that, for them to move autonomously, their limited sensing and computational capabilities had to be overcome. The team, consisting of researchers from TU Delft, the University of Liverpool, and Radboud University Nijmegen, looked to insect navigation as a model.

Enormous Potential

Swarm robotics is a growing field that opens up many possibilities. Real-life insect swarms have served as a model for tiny robots: an individual robot may be limited in its abilities, but grouping many together provides new capabilities. Small robots are often less expensive, and together they can complete tasks that a single larger one cannot. With small robotic drones acting as a swarm, disaster sites could be explored and understood much faster. The technology is not yet deployed, but researchers continue working on it because of its great potential.

The joint research team of TU Delft, the University of Liverpool, and Radboud University Nijmegen is financed by the Dutch national science foundation NWO's Natural Artificial Intelligence programme.

One of the major areas where this technology can be used is within search-and-rescue missions. The research team developed the swarms of drones with the idea of utilizing them in such missions. If the team accomplishes what they want, rescue workers would be able to use swarms of tiny drones to explore a disaster site and report back. For example, a building on the brink of collapsing would be explored by the drones, and they would then report back with the locations of people inside. 

The swarms of tiny drones can also be equipped with cameras to find victims. The research team tested this by sending the drones into an indoor office environment that contained two dummy victims. The experiment was a success: a swarm of six drones explored 80% of the open rooms within six minutes, a task that would not be possible with a single drone.

Another advantage of having multiple small drones is that if one malfunctions and fails to bring back an image, there are several others with the same information. This was shown in the test when one drone found a victim but lost the image, and another came back with it. 

Biggest Challenges

Kimberly McGuire is a PhD student who worked on the project. 

“The biggest challenge in achieving swarm exploration lies at the level of the individual intelligence of the drones,” says McGuire. “In the beginning of the project, we focused on achieving basic flight capabilities such as controlling the velocity and avoiding obstacles. After that, we designed a method for the small drones to detect and avoid each other. We solved this by having each drone carry a wireless communication chip and then making use of the signal strength between these chips — this is like the number of bars shown on your phone that decrease when you move away from your WiFi router in your home. The main advantages of this method are that it does not require extra hardware on the drone and that it requires very few computations.”
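The signal-strength trick McGuire describes can be sketched in a few lines. The snippet below is a minimal illustration, assuming a standard log-distance path-loss model; the reference RSSI, path-loss exponent, and safety radius are made-up placeholder values, not parameters from the TU Delft drones.

```python
# Hypothetical log-distance path-loss model: received signal strength (RSSI)
# falls off with the log of the distance between two radios.
RSSI_AT_1M = -40.0    # dBm received at a reference distance of 1 m (assumed)
PATH_LOSS_EXP = 2.0   # free-space path-loss exponent (assumed)

def estimate_distance(rssi_dbm):
    """Invert the path-loss model to get an approximate distance in metres."""
    return 10 ** ((RSSI_AT_1M - rssi_dbm) / (10 * PATH_LOSS_EXP))

def should_avoid(rssi_dbm, safety_radius_m=2.0):
    """Trigger an avoidance manoeuvre when another drone's signal is strong
    enough to imply it is inside the safety radius."""
    return estimate_distance(rssi_dbm) < safety_radius_m
```

As McGuire notes, the appeal of this approach is that the communication chip doubles as a proximity sensor, so no extra hardware or heavy computation is needed.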

The hardest part of developing these tiny swarm robots is autonomous navigation. It is extremely difficult to get a group of small robots to navigate a completely unknown environment. This is the main reason the researchers turned to insects as a model; they often navigate environments without any previous knowledge of them. 

“The main idea underlying the new navigation method is to reduce our navigation expectations to the extreme: we only require the robots to be able to navigate back to the base station,” says Guido de Croon, principal investigator of the project. “The swarm of robots first spreads out into the environment by having each robot follow a different preferred direction. After exploring, the robots return to a wireless beacon located at the base station.”
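De Croon's "navigate back to the base station" idea can be illustrated with a toy homing loop: each drone spreads out along its own preferred heading, then returns by greedily moving in whichever direction improves the beacon signal. Everything below is an illustrative sketch, not the team's actual controller; the signal model simply uses negative distance as a stand-in for received strength.

```python
import math

def signal_strength(position, beacon=(0.0, 0.0)):
    """Stand-in for the received beacon signal: monotonically stronger when
    closer, so climbing it steers the drone back towards the base station."""
    return -math.dist(position, beacon)

def home_step(position, step=1.0, n_headings=8):
    """Greedy homing sketch: sample a few candidate headings and keep the
    one that most improves the (simulated) beacon signal."""
    best = position
    for k in range(n_headings):
        theta = 2 * math.pi * k / n_headings
        candidate = (position[0] + step * math.cos(theta),
                     position[1] + step * math.sin(theta))
        if signal_strength(candidate) > signal_strength(best):
            best = candidate
    return best

# Starting 10 m from the beacon, repeated homing steps close the gap.
pos = (10.0, 0.0)
for _ in range(9):
    pos = home_step(pos)
```

The key design point is how little each robot needs: no map, no position estimate, only a scalar signal it can compare before and after a move.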

This new development is just one of many that are coming out of robotics. Swarm robotics is an important field that opens up many new possibilities.


Robotics

Researchers Improve Robotic Arm Used in Surgery


Robotic surgery is continuing to become more advanced and precise, especially with recent developments from scientists at Tokyo Institute of Technology. They have created a new type of controller for the robotic arm that is used in surgery. The controller aims to make the job of the surgeon easier while providing excellent precision, and it does this by combining the two major types of gripping that are used in commercially available robotic systems. 

The past ten years have seen major advancements in robot-assisted surgery, and the technology is now present in nearly all subspecialties. The robot systems used in robot-assisted surgery often include a controller device that the surgeon manipulates to control a robotic arm. These systems improve surgeons' dexterity and precision by scaling hand motions into smaller movements and filtering out hand tremors, and they also reduce common surgical complications such as surgical site infection.

Robot-assisted surgery does have disadvantages, and certain problems arise for those who perform the surgery. Oftentimes, robotic surgeons suffer physical discomfort during procedures, and finger fatigue sets in. These problems stem from the way the controller is gripped. The two major types of grips used to control surgical robots are the “pinch grip” and the “power grip.” The pinch grip, which has been used in conventional surgeries for centuries, uses the thumb, middle, and index fingers to complete high-precision movements. The power grip, in which a handle is grabbed with the entire hand, is often used for large movements.

The pinch grip often causes fatigue due to the tension it puts on certain hand and finger muscles, and the power grip is less precise. Because of this, neither one is the perfect option. 

The newly published study in The International Journal of Medical Robotics and Computer Assisted Surgery, put forward by Mr. Solmon Jeong and Dr. Kotaro Tadano from Tokyo Institute of Technology (Tokyo Tech), highlights a new solution. 

The researchers developed a new controller that combines those two different types of gripping. According to Dr. Tadano, “In robotic surgery, the limitations of the two conventional gripping methods are strongly related to the advantages and disadvantages of each gripping type. Thus, we wanted to investigate whether a combined gripping method can improve the manipulation performance during robotic surgery, as this can leverage the advantages of both gripping types while compensating for their disadvantages.”

The researchers received promising results from the proof-of-concept experiment. They proceeded to design a robotic surgery system with a modular controller capable of being adjusted to three different types of grips: pinch, power, or combined gripping. The results showed that the combined gripping method performed better in many ways, including the number of failures, the time required, and the overall length of the movements performed to reach the targets. The combined gripping method was also easier and more comfortable to use, according to participants in the experiment.

“The manipulating method of master controllers for robotic surgery has a significant influence in terms of intuitiveness, comfort, precision, and stability. In addition to enabling precise operation, a comfortable manipulating method could potentially benefit both the patient and the surgeon,” said Dr. Tadano.

The new developments will be critical in advancing robotic surgery, and they will further close the gap between human and robot within the industry.


Robotics

Soft Robot Sweats to Regulate Temperature


Researchers at Cornell University have developed a soft robotic muscle that is capable of regulating its temperature through sweating. The new development is one of many which are transforming the soft robotics field.

Thermal management is a fundamental part of creating untethered, high-powered robots that can operate for long periods of time without overheating.

The project was led by Rob Shepherd, an associate professor of mechanical and aerospace engineering at Cornell. 

The team’s paper titled “Automatic Perspiration in 3D Printed Hydrogel Actuators” was published in Science Robotics.

One of the most difficult aspects of developing enduring, adaptable and agile robots is managing the robots’ internal temperature. According to Shepherd, the robot will malfunction or stop completely if the high-torque density motors and exothermic engines responsible for powering a robot overheat.

This problem is especially pronounced in soft robots because of the synthetic materials they are made of. The stretchable materials that make soft robots flexible also cause them to retain heat, unlike metals, which dissipate heat much faster. An internal cooling technology, such as a fan, would take up too much space inside the robot and add weight.

With these challenges in mind, Shepherd’s team looked towards mammals and their natural ability to sweat as inspiration for a cooling system.

“The ability to perspire is one of the most remarkable features of humans,” said co-lead author T.J. Wallin, a research scientist at Facebook Reality Labs. “Sweating takes advantage of evaporated water loss to rapidly dissipate heat and can cool below the ambient environmental temperature. … So as is often the case, biology provided an excellent guide for us as engineers.”

Shepherd’s team partnered with the lab of Cornell engineering professor Emmanuel Giannelis. Together, they created the nanopolymer materials needed for sweating. They developed these using a 3D-printing technique called multi-material stereolithography, which relies on light to cure resin into pre-designed shapes.

The researchers then fabricated fingerlike actuators that were composed of two hydrogel materials able to retain water and respond to temperature. Another way of looking at it is that these were “smart” sponges. The base layer consists of poly-N-isopropylacrylamide, which reacts to temperatures above 30°C (86°F) by shrinking. This reaction squeezes water up into a top layer of polyacrylamide that is perforated with micron-sized pores. The pores react to the same temperature range, and they release the “sweat” by automatically dilating before closing when the temperature drops below 30°C.
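The threshold behaviour described above can be summarized with a toy model: below 30°C the pores stay closed and no water is released; above it, the shrinking base layer squeezes water out. The function below is purely illustrative, and the rate constant is a made-up value, not a measurement from the paper.

```python
def sweat_rate(temp_c, threshold_c=30.0, gain=0.1):
    """Toy model of the hydrogel actuator's perspiration: zero below the
    30 degree C pore-dilation threshold, then a release rate that grows with
    the temperature excess as the base layer shrinks. The gain constant is
    an assumed illustrative value."""
    if temp_c <= threshold_c:
        return 0.0
    return gain * (temp_c - threshold_c)
```

The interesting design property is that this is a purely material response: the "controller" is the chemistry of the two hydrogel layers, with no electronics involved.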

When the water evaporates, the actuator’s surface temperature is reduced by 21°C within 30 seconds. This cooling process is three times more efficient than the one in humans, according to the researchers. When exposed to wind from a fan, the actuators can cool off about six times faster.

One of the issues with the technology is that it can affect a robot’s mobility. The robots are also required to replenish their water supply. Because of this, Shepherd envisions soft robots that eventually will both perspire and drink like mammals. 

The new development of this technology follows a very apparent pattern within the robotics industry. Technology is increasingly being developed based on our natural environment. Whether it’s the cooling process of sweating present in mammals, neural networks based on moon jellyfish, or artificial skin, robotics is a field that in many ways builds on what we already have in nature.


Robotics

Facebook Creates Method That May Allow AI Robots to Navigate Without a Map



Facebook has recently created an algorithm that enhances an AI agent’s ability to navigate an environment, letting the agent determine the shortest route through new environments without access to a map. While mobile robots typically have a map programmed into them, the new algorithm that Facebook designed could enable the creation of robots that can navigate environments without the need for maps.

According to a post created by Facebook researchers, a major challenge for robot navigation is endowing AI systems with the ability to navigate through novel environments and reach programmed destinations without a map. To tackle this challenge, Facebook created a reinforcement learning algorithm distributed across multiple learners, called decentralized distributed proximal policy optimization (DD-PPO). DD-PPO was given only compass data, GPS data, and access to an RGB-D camera, yet was reportedly able to navigate a virtual environment and reach a goal without any map data.
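The "decentralized distributed" part of DD-PPO refers to how the learners synchronize: each worker computes a gradient from its own rollouts, and the workers average those gradients collectively rather than through a central parameter server. The sketch below illustrates just that averaging step, with plain Python lists standing in for parameter tensors and for the all-reduce collective a real system would use.

```python
def allreduce_mean(worker_grads):
    """Sketch of the decentralized synchronous step at the heart of DD-PPO:
    every worker contributes its own gradient, and the element-wise mean is
    computed across workers with no central server. In practice this is an
    all-reduce over GPU tensors; here it is a list-of-lists average."""
    n_workers = len(worker_grads)
    n_params = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n_workers
            for i in range(n_params)]
```

After the average, every worker applies the same update, so all copies of the policy stay identical without any machine acting as a coordinator.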

According to the researchers, the agents were trained in virtual environments like office buildings and houses. The resulting algorithm was capable of navigating a simulated indoor environment, choosing the correct fork in a path, and quickly recovering from errors if it chose the wrong path. The virtual environment results were promising, and it’s important that the agents are able to reliably navigate these common environments, as in the real world an agent could damage itself or its surroundings if it fails.

The Facebook research team explained that the focus of their project was assistive robots, as proper, reliable navigation for assistive robots and AI agents is essential. The research team explained that navigation is essential for a wide variety of assistive AI systems, from robots that perform tasks around the house to AI-driven devices that help people with visual impairments. The research team also argued that AI creators should move away from map usage in general, as maps are often outdated as soon as they are drawn, and real-world environments are constantly changing and evolving.

As TechExplore reported, the Facebook research team made use of the open-source AI Habitat platform, which enabled them to train embodied agents in photorealistic 3D environments in a timely fashion. The platform provided access to a set of simulated environments, and these environments are realistic enough that the data generated by the AI model can be applied to real-world cases. Douglas Heaven in MIT Technology Review explained the intensity of the model's training:

“Facebook trained bots for three days inside AI Habitat, a photorealistic virtual mock-up of the interior of a building, with rooms and corridors and furniture. In that time they took 2.5 billion steps—the equivalent of 80 years of human experience.”

Due to the sheer complexity of the training task, the researchers reportedly culled the weak learners as the training continued in order to speed up training time. The research team hopes to take their current model further and go on to create algorithms that can navigate complex environments using only camera data, dropping the GPS data and compass. The reason for this is that GPS data and compass data can often be thrown off indoors, be too noisy, or just be unavailable.
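The culling of slow learners can be pictured as a simple preemption rule: once enough workers have finished collecting their rollouts, the stragglers are told to stop early so the synchronous update is not held hostage to the slowest machine. The threshold below is an assumed illustrative value, not the one used in Facebook's training runs.

```python
def should_preempt(finished_workers, total_workers, threshold=0.6):
    """Straggler-preemption sketch: return True once the fraction of
    workers that have finished their rollout collection reaches the
    threshold (60% here, an assumed value), signalling the remaining slow
    workers to cut their rollouts short and join the gradient step."""
    return finished_workers / total_workers >= threshold
```

Because preempted workers still contribute whatever experience they gathered, this trades a little data per step for much better scaling across many machines.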

While the technology has yet to be tested outdoors and has trouble navigating over long distances, the development of the algorithm is an important step toward the next generation of robots, especially delivery drones and robots that operate in offices or homes.
