Engineers Working on Two-Legged, Humanoid Robot

Engineers are currently developing a two-legged, humanoid robot capable of exerting force and pushing against objects while keeping its balance. A team from MIT and the University of Illinois at Urbana-Champaign has successfully found a way to control balance in a teleoperated robot, a capability that will play a critical role in getting humanoids to complete high-impact tasks in difficult environments.

The robot the team created has a “machined torso and two legs.” A human operator controls it remotely by wearing a vest that transmits information about the operator’s motion and ground reaction forces.

The operator controls the robot’s locomotion, and the vest in turn lets the operator feel the robot’s motions. For example, if the robot begins to lose its balance, the operator feels a pull on the vest and can readjust to steady the robot.
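
Conceptually, the setup is a bilateral feedback loop: the operator’s motion commands the robot, while the robot’s tilt is reflected back as a pull on the vest. The sketch below is a minimal, illustrative version of such a loop; the gains and the mapping are assumptions, not the team’s actual controller.

```python
# Toy sketch (not the authors' controller): one time step of balance
# feedback. The robot's tilt is mirrored as a pull on the vest; the
# operator leans against the pull, and that lean is sent back as a
# corrective command. All gains and names here are illustrative.

def feedback_step(robot_tilt: float, k_vest: float = 40.0,
                  k_operator: float = 0.8, k_robot: float = 5.0) -> float:
    """Return the corrective torque commanded to the robot."""
    vest_pull = k_vest * robot_tilt            # N: operator feels the fall
    operator_lean = -k_operator * vest_pull    # human resists the pull
    return k_robot * operator_lean             # lean mapped to robot torque

# Example: a robot tipping 0.1 rad forward yields a restoring command.
print(feedback_step(0.1))   # negative value -> torque opposing the tilt
```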

In the experiments and tests carried out so far, operators were able to keep the robot balanced even as it jumped and walked in place alongside them.

Joao Ramos, then an MIT postdoc, developed the new approach.

“It’s like running with a heavy backpack — you can feel how the dynamics of the backpack move around you, and you can compensate properly,” he says. “Now if you want to open a heavy door, the human can command the robot to throw its body at the door and push it open, without losing balance.”

Ramos is now an assistant professor at the University of Illinois at Urbana-Champaign, and he published the study in Science Robotics. Sangbae Kim, associate professor of mechanical engineering at MIT, is the study’s co-author.

The research was supported, in part, by Hon Hai Precision Industry Co., Ltd. and Naver Labs Corporation.

Prior Work

Kim and Ramos previously developed the two-legged robot HERMES (Highly Efficient Robotic Mechanisms and Electromechanical System) and worked on methods that let it mimic an operator via teleoperation. According to the researchers, this mode of operation has a distinctly human advantage.

“Because you have a person who can learn and adapt on the fly, a robot can perform motions that it’s never practiced before [via teleoperation],” Ramos says.

HERMES could perform actions such as pouring coffee into a cup, chopping wood with an ax, and putting out a fire with an extinguisher. These actions rely on the robot’s upper body, with algorithms matching the robot’s limb positioning to the operator’s. HERMES could only manage such high-impact actions because it was fixed in place, which made it much easier to maintain balance; taking any steps would likely have caused the robot to fall over.

“We realized in order to generate high forces or move heavy objects, just copying motions wouldn’t be enough, because the robot would fall easily,” Kim says. “We needed to copy the operator’s dynamic balance.”

Little HERMES

The team developed Little HERMES, a miniature version of the original, about a third the size of an average human adult. It consists of a torso and two legs and was built specifically to test lower-body actions such as locomotion and balance.

Little HERMES is likewise teleoperated, with the operator wearing a vest that controls the robot.

Mimicking the human’s motions was one thing; mimicking the human’s balance is harder. The team identified two quantities at the heart of balance: a person’s center of mass and their center of pressure. Ramos found that a person’s balance is determined by where the center of mass sits relative to the center of pressure.
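
A standard way to formalize this relationship (a common model in legged-robot control, not necessarily the paper’s exact formulation) is the linear inverted pendulum: the robot stays balanced as long as its “capture point,” the center of mass position plus a velocity-dependent offset, remains somewhere the feet can place the center of pressure. A minimal sketch:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def capture_point(com_x: float, com_vx: float, com_height: float) -> float:
    """Where the center of pressure must go to bring the center of mass
    to rest (linear inverted pendulum approximation)."""
    omega = math.sqrt(G / com_height)
    return com_x + com_vx / omega

def is_balanced(com_x, com_vx, com_height, foot_min, foot_max):
    """Balance is recoverable if the capture point lies inside the
    region where the feet can place the center of pressure."""
    cp = capture_point(com_x, com_vx, com_height)
    return foot_min <= cp <= foot_max

# Example: CoM 5 cm ahead of the ankle, moving forward at 0.2 m/s,
# with a 0.9 m pendulum height and a 20 cm support region.
print(is_balanced(0.05, 0.2, 0.9, -0.1, 0.1))
```

In this example the capture point lands just outside the support region, so the model says the robot must take a step to recover.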

After condensing this relationship into data and developing several models, the team began conducting tests and eventually settled on a model to use on Little HERMES.

Ramos controlled Little HERMES through the vest and could feel the robot’s motions. In one test, Little HERMES was struck by a hammer, and Ramos felt the vest jerk in the direction the robot moved. As he resisted the movement, the robot followed, keeping its balance and avoiding a fall.

Kim and Ramos plan to keep working toward a full-body humanoid. They hope it will one day operate in disaster zones, aiding rescue missions.

“Now we can do heavy door opening or lifting or throwing heavy objects, with proper balance communication,” Kim says.

 

Soft Robot Sweats to Regulate Temperature

Researchers at Cornell University have developed a soft robotic muscle capable of regulating its temperature by sweating. It is one of many developments transforming the soft robotics field.

This thermal management technique is a fundamental part of creating untethered, high-powered robots that can operate for long periods without overheating.

The project was led by Rob Shepherd, an associate professor of mechanical and aerospace engineering at Cornell. 

The team’s paper titled “Automatic Perspiration in 3D Printed Hydrogel Actuators” was published in Science Robotics.

One of the most difficult aspects of developing enduring, adaptable, and agile robots is managing their internal temperature. According to Shepherd, if the high-torque density motors and exothermic engines that power a robot overheat, the robot will malfunction or stop entirely.

This problem is especially acute in soft robots because they are made of synthetic materials. Those materials make the robots more flexible, but they also trap heat, unlike metals, which dissipate it much faster. An internal cooling technology such as a fan is no solution either: it would take up too much space inside the robot and add weight.

With these challenges in mind, Shepherd’s team looked to mammals and their natural ability to sweat as inspiration for a cooling system.

“The ability to perspire is one of the most remarkable features of humans,” said co-lead author T.J. Wallin, a research scientist at Facebook Reality Labs. “Sweating takes advantage of evaporated water loss to rapidly dissipate heat and can cool below the ambient environmental temperature. … So as is often the case, biology provided an excellent guide for us as engineers.”

Shepherd’s team partnered with the lab of Cornell engineering professor Emmanuel Giannelis to create the nanopolymer materials needed for sweating. They produced these materials with a 3D-printing technique called multi-material stereolithography, which uses light to cure resin into pre-designed shapes.

The researchers then fabricated finger-like actuators composed of two hydrogel materials that retain water and respond to temperature; in effect, “smart” sponges. The base layer, made of poly-N-isopropylacrylamide, reacts to temperatures above 30°C (86°F) by shrinking, which squeezes water up into a top layer of polyacrylamide perforated with micron-sized pores. The pores respond to the same temperature range, automatically dilating to release the “sweat” and closing again when the temperature drops below 30°C.

When the water evaporates, the actuator’s surface temperature is reduced by 21°C within 30 seconds. This cooling process is three times more efficient than the one in humans, according to the researchers. When exposed to wind from a fan, the actuators can cool off about six times faster.
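
As a rough illustration of the threshold behavior described above, one can model the pores as a switch at 30°C that adds an evaporative term to ordinary passive cooling. This is a toy model with made-up constants, chosen only to land in the same ballpark as the reported numbers; it is not the paper’s physics:

```python
# Toy thermal model of a sweating actuator (illustrative constants):
# pores open above 30 C, and evaporation then adds extra cooling.

T_THRESHOLD = 30.0   # C: pores dilate above this temperature
K_PASSIVE = 0.01     # 1/s: passive cooling rate toward ambient
K_SWEAT = 0.03       # 1/s: extra evaporative cooling when pores are open

def step_temperature(temp: float, ambient: float, dt: float = 1.0) -> float:
    cooling = K_PASSIVE * (temp - ambient)
    if temp > T_THRESHOLD:            # pores open: sweat evaporates
        cooling += K_SWEAT * (temp - ambient)
    return temp - cooling * dt

temp = 60.0   # C: overheated actuator
for _ in range(30):                   # 30 simulated seconds
    temp = step_temperature(temp, ambient=25.0)
print(round(temp, 1))  # drops by roughly 25 C, near the reported scale
```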

One issue with the technology is that it can hinder a robot’s mobility; the robots also need to replenish their water supply. Because of this, Shepherd envisions soft robots that will eventually both perspire and drink like mammals.

This development follows a clear pattern in robotics: technology increasingly takes its cues from the natural world. Whether it’s the cooling effect of sweating in mammals, neural networks modeled on moon jellyfish, or artificial skin, robotics is a field that in many ways builds on what already exists in nature.

 

Facebook Creates Method That May Allow AI Robots to Navigate Without a Map


Facebook has created an algorithm that enhances an AI agent’s ability to navigate an environment, letting the agent find the shortest route through new environments without access to a map. Mobile robots typically have a map programmed into them, so the new algorithm could enable robots that navigate environments without needing maps at all.

According to a post by Facebook researchers, a major challenge for robot navigation is endowing AI systems with the ability to navigate novel environments and reach programmed destinations without a map. To tackle this challenge, Facebook created a reinforcement learning algorithm distributed across multiple learners, called decentralized distributed proximal policy optimization (DD-PPO). Given only compass data, GPS data, and an RGB-D camera, DD-PPO was reportedly able to navigate a virtual environment and reach a goal without any map data.
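
DD-PPO builds on proximal policy optimization (PPO), whose core update maximizes a clipped surrogate objective so each policy change stays small. The sketch below shows that clipped loss in PyTorch on dummy data; in DD-PPO, many workers compute this same gradient on their own rollouts and synchronize via all-reduce. This is a generic PPO illustration, not Facebook’s actual code:

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    # Probability ratio between the new and old policies.
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # Pessimistic bound: take the smaller objective, then negate for a loss.
    return -torch.min(unclipped, clipped).mean()

# Dummy batch: 4 actions with old/new log-probabilities and advantages.
new_lp = torch.tensor([-1.0, -0.5, -2.0, -1.2], requires_grad=True)
old_lp = torch.tensor([-1.1, -0.6, -1.8, -1.2])
adv = torch.tensor([0.5, -0.2, 1.0, 0.1])

loss = ppo_clip_loss(new_lp, old_lp, adv)
loss.backward()          # gradients a worker would share via all-reduce
print(loss.item())
```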

According to the researchers, the agents were trained in virtual environments such as office buildings and houses. The resulting algorithm could navigate a simulated indoor environment, choose the correct fork in a path, and quickly recover if it chose the wrong one. These results matter because agents must navigate common environments reliably: in the real world, an agent that fails could damage itself or its surroundings.

The Facebook research team explained that the focus of their project was assistive robots, for which proper, reliable navigation is essential. Navigation underpins a wide variety of assistive AI systems, from robots that perform tasks around the house to AI-driven devices that help people with visual impairments. The team also argued that AI creators should move away from maps in general: maps are often outdated as soon as they are drawn, and real-world environments change and evolve constantly.

As TechXplore reported, the Facebook research team made use of the open-source AI Habitat platform, which enabled them to train embodied agents in photorealistic 3D environments in a timely fashion. Habitat provided access to a set of simulated environments realistic enough that data generated by the AI model can be applied to real-world cases. Douglas Heaven, writing in MIT Technology Review, described the intensity of the model’s training:

“Facebook trained bots for three days inside AI Habitat, a photorealistic virtual mock-up of the interior of a building, with rooms and corridors and furniture. In that time they took 2.5 billion steps—the equivalent of 80 years of human experience.”
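
For scale, 80 years is roughly 80 × 365 × 24 × 3,600 ≈ 2.5 billion seconds, so the comparison implicitly treats each simulation step as about one second of lived experience.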

Due to the sheer complexity of the training task, the researchers reportedly culled the weakest learners as training continued in order to speed it up. The team hopes to push the current model further and create algorithms that can navigate complex environments using camera data alone, dropping GPS and compass data, which can be thrown off indoors, be too noisy, or simply be unavailable.

While the technology has yet to be tested outdoors and has trouble navigating over long distances, the algorithm is an important step toward the next generation of robots, especially delivery drones and robots that operate in offices or homes.

Scientists Repurpose Living Frog Cells to Develop World’s First Living Robot

In a remarkable cross between biological life and robotics, a team of scientists has repurposed living frog cells to develop “xenobots.” The cells came from frog embryos, and the xenobots are just a millimeter wide. They can move toward a target, potentially carry a payload such as medicine inside a human body, and heal themselves after being cut or damaged.

“These are novel living machines,” according to Joshua Bongard, a computer scientist and robotics expert at the University of Vermont who co-led the new research. “They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”

The scientists designed the bots on a supercomputer at the University of Vermont, and a group of biologists at Tufts University assembled and tested them. 

“We can imagine many useful applications of these living robots that other machines can’t do,” says co-leader Michael Levin who directs the Center for Regenerative and Developmental Biology at Tufts, “like searching out nasty compounds or radioactive contamination, gathering microplastic in the oceans, traveling in arteries to scrape out plaque.”

The research was published in the Proceedings of the National Academy of Sciences on January 13.

According to the team, this is the first research that “designs completely biological machines from the ground up.”

Designing the xenobots took months of processing time on the Deep Green supercomputer cluster at UVM’s Vermont Advanced Computing Core. The team, which included lead author and doctoral student Sam Kriegman, relied on an evolutionary algorithm to generate thousands of candidate designs for the new life-forms.

Given a task by the scientists, such as locomotion in one direction, the computer would continuously reassemble a few hundred simulated cells into different forms and body shapes. As the programs ran, the most successful simulated organisms were kept and refined. The algorithm ran independently a hundred times, and the best designs were picked for testing.
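
The loop just described is a classic evolutionary algorithm: sample a population of designs, score them, keep the fittest, and mutate. Below is a toy version in Python; the fitness function is a stand-in (the real system scored full physics simulations of locomotion):

```python
import random

DESIGN_SIZE = 16     # e.g., which of 16 cell sites are muscle vs. skin
POPULATION = 50
GENERATIONS = 100

def fitness(design):
    # Stand-in for "distance traveled in simulation": here we simply
    # reward alternating cell types. The real objective was locomotion.
    return sum(a != b for a, b in zip(design, design[1:]))

def mutate(design, rate=0.1):
    # Flip each cell type with a small probability.
    return [1 - cell if random.random() < rate else cell for cell in design]

population = [[random.randint(0, 1) for _ in range(DESIGN_SIZE)]
              for _ in range(POPULATION)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION // 2]      # keep the best designs
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION - len(survivors))]

print(fitness(max(population, key=fitness)))      # best score found
```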

The team at Tufts, led by Levin with the help of microsurgeon Douglas Blackiston, then took up the project and brought the designs to life. They gathered stem cells harvested from embryos of the African frog species Xenopus laevis, separated out single cells, and left them to incubate. Using tiny forceps and an electrode, the team cut the cells and joined them under a microscope into the designs created by the computer.

Assembled into these all-new body forms, the cells began to work together. The skin cells formed a more passive structure, while the heart muscle cells produced the ordered forward motion guided by the computer’s design. These spontaneous self-organizing patterns allowed the robots to move on their own.

The organisms could move in a coherent way, and they lasted days or weeks exploring their watery environment, powered by embryonic energy stores. They failed, however, once flipped onto their backs.

“It’s a step toward using computer-designed organisms for intelligent drug delivery,” says Bongard, a professor in UVM’s Department of Computer Science and Complex Systems Center.

Since the xenobots are living technologies, they have certain advantages. 

“The downside of living tissue is that it’s weak and it degrades,” says Bongard. “That’s why we use steel. But organisms have 4.5 billion years of practice at regenerating themselves and going on for decades. These xenobots are fully biodegradable,” he continues. “When they’re done with their job after seven days, they’re just dead skin cells.”

These developments will have big implications for the future. 

“If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules,” says Levin. “Much of science is focused on controlling the low-level rules. We also need to understand the high-level rules. If you wanted an anthill with two chimneys instead of one, how do you modify the ants? We’d have no idea.”

“I think it’s an absolute necessity for society going forward to get a better handle on systems where the outcome is very complex. A first step towards doing that is to explore: how do living systems decide what an overall behavior should be and how do we manipulate the pieces to get the behaviors we want?”

“This study is a direct contribution to getting a handle on what people are afraid of, which is unintended consequences, whether in the rapid arrival of self-driving cars, changing gene drives to wipe out whole lineages of viruses, or the many other complex and autonomous systems that will increasingly shape the human experience.”

“There’s all of this innate creativity in life,” says UVM’s Josh Bongard. “We want to understand that more deeply — and how we can direct and push it toward new forms.”

 
