Soft Robot Sweats to Regulate Temperature

Researchers at Cornell University have developed a soft robotic muscle capable of regulating its temperature by sweating. It is one of many developments that are transforming the field of soft robotics.

This thermal management technique is a fundamental part of creating untethered, high-powered robots that can operate for long periods without overheating.

The project was led by Rob Shepherd, an associate professor of mechanical and aerospace engineering at Cornell. 

The team’s paper titled “Automatic Perspiration in 3D Printed Hydrogel Actuators” was published in Science Robotics.

One of the most difficult aspects of developing enduring, adaptable and agile robots is managing the robots’ internal temperature. According to Shepherd, the robot will malfunction or stop completely if the high-torque density motors and exothermic engines responsible for powering a robot overheat.

This problem is especially acute in soft robots because they are made of synthetic materials. Those materials make the robots more flexible, but they also trap heat, unlike metals, which dissipate it much faster. An internal cooling technology such as a fan is a poor fit, because it would take up too much space inside the robot and add weight.

With these challenges in mind, Shepherd’s team looked towards mammals and their natural ability to sweat as inspiration for a cooling system.

“The ability to perspire is one of the most remarkable features of humans,” said co-lead author T.J. Wallin, a research scientist at Facebook Reality Labs. “Sweating takes advantage of evaporated water loss to rapidly dissipate heat and can cool below the ambient environmental temperature. … So as is often the case, biology provided an excellent guide for us as engineers.”

Shepherd’s team partnered with the lab of Cornell engineering professor Emmanuel Giannelis to create the nanopolymer materials needed for sweating. They made these materials with a 3D-printing technique called multi-material stereolithography, which uses light to cure resin into pre-designed shapes.

The researchers then fabricated fingerlike actuators composed of two hydrogel materials that retain water and respond to temperature; in effect, “smart” sponges. The base layer, made of poly-N-isopropylacrylamide, reacts to temperatures above 30°C (86°F) by shrinking, which squeezes water up into a top layer of polyacrylamide perforated with micron-sized pores. The pores respond to the same temperature range, automatically dilating to release the “sweat” and then closing when the temperature drops back below 30°C.
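To make the thermostat-like behavior concrete, here is a minimal Python sketch of the threshold logic described above. Only the 30°C trigger comes from the paper; the heating rate, cooling rate, time step, and starting temperature are made-up constants for illustration.

```python
# Toy model of the actuators' temperature-triggered "sweating."
# Only the 30 C threshold reflects the paper; all rates are invented.

SWEAT_THRESHOLD_C = 30.0  # pores dilate above this temperature

def pores_dilated(temp_c: float) -> bool:
    """Pores in the polyacrylamide top layer open above the threshold."""
    return temp_c > SWEAT_THRESHOLD_C

def step_temperature(temp_c: float, heat_in: float, dt: float = 1.0) -> float:
    """Advance surface temperature by one time step (arbitrary units).

    Evaporative cooling acts only while the pores are dilated; its
    strength (0.8 degrees per step) is an illustrative constant.
    """
    cooling = 0.8 if pores_dilated(temp_c) else 0.0
    return temp_c + (heat_in - cooling) * dt

temp = 45.0  # an overheated actuator
for _ in range(40):
    temp = step_temperature(temp, heat_in=0.1)
print(f"final temperature: {temp:.1f} C, pores dilated: {pores_dilated(temp)}")
```

Run long enough, the simulated temperature settles near the 30°C threshold: the pores open when the actuator overheats and close once evaporation has cooled it, which is the self-regulating behavior the perforated layer provides.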

When the water evaporates, the actuator’s surface temperature is reduced by 21°C within 30 seconds. This cooling process is three times more efficient than the one in humans, according to the researchers. When exposed to wind from a fan, the actuators can cool off about six times faster.

One issue with the technology is that it can hamper a robot’s mobility, and the robots also need to replenish their water supply. Because of this, Shepherd envisions soft robots that will eventually both perspire and drink like mammals.

This development follows a clear pattern within the robotics industry: technology is increasingly modeled on the natural world. Whether it’s the cooling effect of sweating in mammals, neural networks based on moon jellyfish, or artificial skin, robotics is a field that in many ways builds on what already exists in nature.

 

Researchers Improve Robotic Arm Used in Surgery

Robotic surgery is continuing to become more advanced and precise, especially with recent developments from scientists at Tokyo Institute of Technology. They have created a new type of controller for the robotic arm that is used in surgery. The controller aims to make the job of the surgeon easier while providing excellent precision, and it does this by combining the two major types of gripping that are used in commercially available robotic systems. 

The past ten years have seen major advancements in robot-assisted surgery, and the technology is now present in nearly all surgical subspecialties. These systems typically include a controller device that the surgeon manipulates to drive a robotic arm. They improve the surgeon’s dexterity and precision by scaling hand motions into smaller movements and filtering out hand tremors, and they also reduce common complications such as surgical site infection.

Robot-assisted surgery does have disadvantages, however, and it creates certain problems for the people who perform it. Robotic surgeons often suffer physical discomfort during procedures, and finger fatigue sets in. These problems stem from the way the controller is gripped. The two major grips used to control surgical robots are the “pinch grip” and the “power grip.” The pinch grip, which has been used in conventional surgery for centuries, employs the thumb, index, and middle fingers for high-precision movements. The power grip, in which a handle is grabbed with the entire hand, is typically used for large movements.

The pinch grip often causes fatigue due to the tension it puts on certain hand and finger muscles, and the power grip is less precise. Because of this, neither one is the perfect option. 

The newly published study in The International Journal of Medical Robotics and Computer Assisted Surgery, put forward by Mr. Solmon Jeong and Dr. Kotaro Tadano from Tokyo Institute of Technology (Tokyo Tech), highlights a new solution. 

The researchers developed a new controller that combines those two different types of gripping. According to Dr. Tadano, “In robotic surgery, the limitations of the two conventional gripping methods are strongly related to the advantages and disadvantages of each gripping type. Thus, we wanted to investigate whether a combined gripping method can improve the manipulation performance during robotic surgery, as this can leverage the advantages of both gripping types while compensating for their disadvantages.”

The researchers received promising results from a proof-of-concept experiment, then designed a robotic surgery system with a modular controller that can be adjusted to three grip types: pinch, power, or combined. The results showed that the combined gripping method performed better on several measures, including fewer failures, less time required, and shorter overall movements to reach the targets. Many participants in the experiment also found the combined grip easier and more comfortable to use.

“The manipulating method of master controllers for robotic surgery has a significant influence in terms of intuitiveness, comfort, precision, and stability. In addition to enabling precise operation, a comfortable manipulating method could potentially benefit both the patient and the surgeon,” said Dr. Tadano.

The new developments will be critical in advancing robotic surgery, and they will further close the gap between human and robot within the industry.

 

Facebook Creates Method That May Allow AI Robots To Navigate Without a Map

Facebook has recently created an algorithm that enhances an AI agent’s ability to navigate, letting the agent determine the shortest route through new environments without access to a map. While mobile robots typically have a map programmed into them, Facebook’s new algorithm could enable the creation of robots that navigate environments without needing one.

According to a post by Facebook researchers, a major challenge for robot navigation is endowing AI systems with the ability to navigate through novel environments and reach programmed destinations without a map. To tackle this challenge, Facebook created a reinforcement learning algorithm distributed across multiple learners, called decentralized distributed proximal policy optimization (DD-PPO). Given only compass data, GPS data, and access to an RGB-D camera, DD-PPO was reportedly able to navigate a virtual environment and reach a goal without any map data.
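For readers unfamiliar with the “PPO” inside DD-PPO, the sketch below shows the clipped surrogate loss at the heart of proximal policy optimization, written in plain NumPy. This is a generic illustration of the technique, not Facebook’s code: in DD-PPO, many workers each compute this loss on their own rollouts and synchronize gradients, and that distributed machinery is omitted here. All numbers in the example are placeholders.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """PPO's clipped policy loss (a quantity to be minimized).

    ratio = pi_new(a|s) / pi_old(a|s); clipping keeps each update from
    moving the policy too far from the one that collected the data.
    """
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

# Placeholder rollout statistics for three sampled actions:
logp_old = np.log(np.array([0.20, 0.50, 0.30]))
logp_new = np.log(np.array([0.25, 0.45, 0.30]))
advantages = np.array([1.0, -0.5, 0.2])
print(f"clipped loss: {ppo_clip_loss(logp_new, logp_old, advantages):.4f}")
```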

According to the researchers, the agents were trained in virtual environments such as office buildings and houses. The resulting algorithm was capable of navigating a simulated indoor environment, choosing the correct fork in a path, and quickly recovering from errors when it chose the wrong one. Reliable navigation in these common environments matters: in the real world, an agent that fails could damage itself or its surroundings.

The Facebook research team explained that the focus of their project was assistive robots, for which proper, reliable navigation is essential. Navigation is central to a wide variety of assistive AI systems, from robots that perform tasks around the house to AI-driven devices that help people with visual impairments. The team also argued that AI creators should move away from maps in general, as maps are often outdated as soon as they are drawn, and real-world environments are constantly changing and evolving.

As TechExplore reported, the Facebook research team made use of the open-source AI Habitat platform, which enabled them to train embodied agents in photorealistic 3-D environments in a timely fashion. Habitat provided access to a set of simulated environments realistic enough that the data generated by the AI model can be applied to real-world cases. Douglas Heaven in MIT Technology Review explained the intensity of the model’s training:

“Facebook trained bots for three days inside AI Habitat, a photorealistic virtual mock-up of the interior of a building, with rooms and corridors and furniture. In that time they took 2.5 billion steps—the equivalent of 80 years of human experience.”

Due to the sheer complexity of the training task, the researchers reportedly culled the weakest learners as training continued in order to speed it up. The research team hopes to take the current model further and create algorithms that can navigate complex environments using only camera data, dropping the GPS and compass data, which can be thrown off indoors, be too noisy, or simply be unavailable.
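The DD-PPO paper frames this culling as a preemption threshold: in synchronous distributed training, once a large enough fraction of workers has finished collecting experience, the stragglers are cut off so one slow simulation never stalls the whole update. Here is a minimal sketch of that decision rule; the 60% threshold is illustrative, and the function name is invented for this example.

```python
def should_preempt(num_finished: int, num_workers: int,
                   threshold: float = 0.6) -> bool:
    """Return True once `threshold` of the workers have finished their
    rollouts, signaling the remaining stragglers to stop early."""
    return num_finished / num_workers >= threshold

# Example: with 64 workers and a 0.6 threshold, stragglers are cut
# off once 39 workers have finished.
for finished in (32, 39, 64):
    print(finished, should_preempt(finished, 64))
```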

While the technology has yet to be tested outdoors and has trouble navigating over long distances, the algorithm is an important step toward the next generation of robots, especially delivery drones and robots that operate in offices or homes.

Scientists Repurpose Living Frog Cells to Develop World's First Living Robot

In what is a remarkable cross between biological life and robotics, a team of scientists has repurposed living frog cells to develop “xenobots.” The cells came from frog embryos, and the xenobots are just a millimeter wide. They are capable of moving toward a target, potentially carrying a payload such as medicine inside a human body, and healing themselves after being cut or damaged.

“These are novel living machines,” according to Joshua Bongard, a computer scientist and robotics expert at the University of Vermont who co-led the new research. “They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”

The scientists designed the bots on a supercomputer at the University of Vermont, and a group of biologists at Tufts University assembled and tested them. 

“We can imagine many useful applications of these living robots that other machines can’t do,” says co-leader Michael Levin who directs the Center for Regenerative and Developmental Biology at Tufts, “like searching out nasty compounds or radioactive contamination, gathering microplastic in the oceans, traveling in arteries to scrape out plaque.”

The research was published in the Proceedings of the National Academy of Sciences on January 13.

According to the team, this is the first time ever that research “designs completely biological machines from the ground up.”

Designing the xenobots took months of processing time on the Deep Green supercomputer cluster at UVM’s Vermont Advanced Computing Core. The team, which included lead author and doctoral student Sam Kriegman, relied on an evolutionary algorithm to produce thousands of candidate designs for the new life-forms.

When the scientists gave the computer a task, such as locomotion in one direction, it would continuously reassemble a few hundred simulated cells into different forms and body shapes. As the programs ran, the most successful simulated organisms were kept and refined. The algorithm ran independently a hundred times, and the best designs were picked for testing.
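As an illustration of the general technique (not the team’s actual pipeline), the following Python sketch runs the same keep-the-best-and-refine loop on toy designs. A design here is just a list of cell types echoing the skin and muscle cells used later, and the fitness function is a stand-in for the physics simulation that scored locomotion.

```python
import random

CELL_TYPES = ["skin", "muscle"]

def random_design(n_cells=100):
    """A candidate body plan: one cell type per position."""
    return [random.choice(CELL_TYPES) for _ in range(n_cells)]

def mutate(design, rate=0.05):
    """Randomly reassign a small fraction of cells."""
    return [random.choice(CELL_TYPES) if random.random() < rate else cell
            for cell in design]

def fitness(design):
    """Stand-in objective rewarding a balance of muscle (propulsion)
    and skin (structure); the real work scored simulated locomotion."""
    muscle_fraction = design.count("muscle") / len(design)
    return 1.0 - abs(muscle_fraction - 0.5)

population = [random_design() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]  # keep the most successful designs
    offspring = [mutate(random.choice(survivors)) for _ in range(25)]
    population = survivors + offspring

best = max(population, key=fitness)
print(f"best fitness after 100 generations: {fitness(best):.3f}")
```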

The team at Tufts, led by Levin and with the help of microsurgeon Douglas Blackiston, then took up the project. They transferred the designs into the next stage, which was life. The team gathered stem cells that were harvested from the embryos of African frogs, the species Xenopus laevis. Single cells were then separated out and left to incubate. The team used tiny forceps and an electrode to cut the cells and join them under a microscope into the designs created by the computer.

The cells were assembled into all-new body forms and began to work together. The skin cells formed a more passive architecture, while the heart muscle cells produced ordered forward motion, as guided by the computer’s design. These spontaneous self-organizing patterns allowed the robots to move on their own.

The organisms were capable of moving in a coherent way, and they spent days or weeks exploring their watery environment, powered by embryonic energy stores. They failed, however, once flipped over onto their backs.

“It’s a step toward using computer-designed organisms for intelligent drug delivery,” says Bongard, a professor in UVM’s Department of Computer Science and Complex Systems Center.

Since the xenobots are living technologies, they have certain advantages. 

“The downside of living tissue is that it’s weak and it degrades,” says Bongard. “That’s why we use steel. But organisms have 4.5 billion years of practice at regenerating themselves and going on for decades. These xenobots are fully biodegradable,” he continues. “When they’re done with their job after seven days, they’re just dead skin cells.”

These developments will have big implications for the future. 

“If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules,” says Levin. “Much of science is focused on controlling the low-level rules. We also need to understand the high-level rules. If you wanted an anthill with two chimneys instead of one, how do you modify the ants? We’d have no idea.”

“I think it’s an absolute necessity for society going forward to get a better handle on systems where the outcome is very complex. A first step towards doing that is to explore: how do living systems decide what an overall behavior should be and how do we manipulate the pieces to get the behaviors we want?”

“This study is a direct contribution to getting a handle on what people are afraid of, which is unintended consequences, whether in the rapid arrival of self-driving cars, changing gene drives to wipe out whole lineages of viruses, or the many other complex and autonomous systems that will increasingly shape the human experience.”

“There’s all of this innate creativity in life,” says UVM’s Josh Bongard. “We want to understand that more deeply — and how we can direct and push it toward new forms.”

 
