E-Skin Based on Human Sensory Nervous System

A new artificial nervous system has been developed by Assistant Professor Benjamin Tee and fellow researchers from the Department of Materials Science and Engineering at the National University of Singapore (NUS) Faculty of Engineering. Tee and his team spent a year and a half developing the new e-skin, and he himself has worked on the underlying technology for many years longer, with the goal of giving robots and prosthetics a better sense of touch.

The new system is called the Asynchronous Coded Electronic Skin (ACES), and the research was published in Science Robotics on July 18, 2019. Earlier electronic skins required extensive wiring and were prone to damage.

The new e-skin offers a sense of touch comparable to human skin, responds extremely quickly, and copes with damage better than real skin. Tee explained the technology and how, in some ways, it surpasses the skin on our bodies.

“Humans use our sense of touch to accomplish every daily task, such as picking up a cup of coffee or making a handshake. Without it, we will even lose our sense of balance when walking. Similarly, robots need to have a sense of touch in order to interact better with humans, but robots today still cannot feel objects very well.”

The researchers used the human sensory nervous system as a model. The ACES electronic nervous system detects signals in a similar way, but it relies on a network of sensors connected to a single electrical conductor, unlike the bundles of nerves in human skin. Professor Tee explained why the e-skin is modeled after the human sensory nervous system.

“The human sensory nervous system is extremely efficient, and it works all the time to the extent that we often take it for granted. It is also very robust to damage. Our sense of touch, for example, does not get affected when we suffer a cut. If we can mimic how our biological system works and make it even better, we can bring about tremendous advancements in the field of robotics where electronic skins are predominantly applied.”

The ACES system is faster than the human sensory nervous system: it can detect touches more than 1,000 times faster. It can also differentiate physical contacts between different sensors in less than 60 nanoseconds, the fastest ever reported for e-skin technology. Thanks to its high fidelity and capture speed, the e-skin can identify the shape, hardness, and texture of objects within 10 milliseconds. Like real human skin, the new e-skin is robust to physical damage. Skin, human or electronic, is in constant contact with the outside environment, which causes a lot of wear. Because the ACES system pairs a single electrical conductor with independent sensors, the skin remains functional as long as at least one connection between a sensor and the conductor survives.
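
To make the single-conductor idea more concrete, here is a loose conceptual sketch in Python. It is not the ACES implementation (whose pulse encoding is not detailed here); it simply assumes each sensor contributes a unique signature to a shared line, and a decoder recovers which sensors are pressed by correlating the combined signal against the known signatures.

```python
import numpy as np

# Hypothetical sketch: sensors share one conductor and are told apart by
# unique pseudo-random signatures. This is an analogy, not the ACES encoding.
rng = np.random.default_rng(0)
SIG_LEN = 64        # samples per signature
NUM_SENSORS = 8

# One unique bipolar (+1/-1) signature per sensor.
signatures = rng.choice([-1.0, 1.0], size=(NUM_SENSORS, SIG_LEN))

def shared_conductor_signal(active_sensors):
    """Superimpose the signatures of every currently pressed sensor."""
    signal = np.zeros(SIG_LEN)
    for s in active_sensors:
        signal += signatures[s]
    return signal

def decode(signal, threshold=0.4):
    """Recover pressed sensors by normalized correlation with each signature."""
    norms = np.linalg.norm(signatures, axis=1) * (np.linalg.norm(signal) + 1e-9)
    scores = (signatures @ signal) / norms
    return [i for i, score in enumerate(scores) if score > threshold]

print(decode(shared_conductor_signal([2, 5])))  # expected to recover sensors 2 and 5
```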

This new technology can be applied in many areas, including AI applications such as prosthetic limbs and human-machine interfaces. Professor Tee talked about the possibilities.

“Scalability is a critical consideration as big pieces of high performing electronic skins are required to cover the relatively large surface areas of robots and prosthetic devices. ACES can be easily paired with any kind of sensor skin layers, for example, those designed to sense temperatures and humidity, to create high performance ACES-enabled electronic skin with an exceptional sense of touch that can be used for a wide range of purposes.” 

The technology will continue to develop in a field that is moving rapidly and creating remarkable opportunities. It will be especially useful in robotics, as that is where most e-skin is currently applied.

Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Soft Robot Sweats to Regulate Temperature

Researchers at Cornell University have developed a soft robotic muscle that can regulate its temperature by sweating. The new development is one of many that are transforming the soft robotics field.

The thermal management technique is a fundamental part of creating untethered, high-powered robots that are able to operate for long periods of time without overheating. 

The project was led by Rob Shepherd, an associate professor of mechanical and aerospace engineering at Cornell. 

The team’s paper titled “Automatic Perspiration in 3D Printed Hydrogel Actuators” was published in Science Robotics.

One of the most difficult aspects of developing enduring, adaptable and agile robots is managing the robots’ internal temperature. According to Shepherd, the robot will malfunction or stop completely if the high-torque density motors and exothermic engines responsible for powering a robot overheat.

This problem is especially pronounced in soft robots because they are made of synthetic materials. Soft robots are more flexible, but that flexibility causes them to retain heat, whereas metals dissipate heat much faster. An internal cooling technology such as a fan is not a good fit, because it would take up too much space inside the robot and add weight.

With these challenges in mind, Shepherd’s team looked towards mammals and their natural ability to sweat as inspiration for a cooling system.

“The ability to perspire is one of the most remarkable features of humans,” said co-lead author T.J. Wallin, a research scientist at Facebook Reality Labs. “Sweating takes advantage of evaporated water loss to rapidly dissipate heat and can cool below the ambient environmental temperature. … So as is often the case, biology provided an excellent guide for us as engineers.”

Shepherd’s team partnered with the lab of Cornell engineering professor Emmanuel Giannelis. Together, they created the nanopolymer materials needed for sweating. They developed these using a 3D-printing technique called multi-material stereolithography, which relies on light to cure resin into pre-designed shapes.

The researchers then fabricated fingerlike actuators that were composed of two hydrogel materials able to retain water and respond to temperature. Another way of looking at it is that these were “smart” sponges. The base layer consists of poly-N-isopropylacrylamide, which reacts to temperatures above 30°C (86°F) by shrinking. This reaction squeezes water up into a top layer of polyacrylamide that is perforated with micron-sized pores. The pores react to the same temperature range, and they release the “sweat” by automatically dilating before closing when the temperature drops below 30°C.

When the water evaporates, the actuator’s surface temperature is reduced by 21°C within 30 seconds. This cooling process is three times more efficient than the one in humans, according to the researchers. When exposed to wind from a fan, the actuators can cool off about six times faster.
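
As a rough illustration of the regulation loop described above, the toy simulation below (not the authors' model) shows how a 30°C pore-opening threshold keeps a heated surface near that temperature; the heating and cooling rates are assumed values chosen only so the numbers echo the reported 21°C drop within 30 seconds.

```python
# Toy regulation sketch. The 30 °C threshold comes from the article; the heating
# and cooling rates are assumed values, not measured parameters.
T_PORE_OPEN = 30.0    # °C: pores dilate above this temperature
HEAT_RATE = 0.2       # °C/s of internal heating while the actuator works (assumed)
SWEAT_COOLING = 0.9   # °C/s removed by evaporation while pores are open (assumed)

def simulate(start_temp=51.0, seconds=60):
    temps = [start_temp]
    for _ in range(seconds):
        t = temps[-1] + HEAT_RATE       # the actuator keeps generating heat
        if temps[-1] > T_PORE_OPEN:     # pores open: water evaporates and cools
            t -= SWEAT_COOLING
        temps.append(t)
    return temps

temps = simulate()
print(f"drop after 30 s: {temps[0] - temps[30]:.1f} °C")  # ~21 °C with these rates
```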

One of the issues with the technology is that it can affect a robot’s mobility. The robots are also required to replenish their water supply. Because of this, Shepherd envisions soft robots that eventually will both perspire and drink like mammals. 

The new development follows a clear pattern within the robotics industry: technology is increasingly modeled on the natural world. Whether it’s the cooling effect of sweating in mammals, neural networks based on moon jellyfish, or artificial skin, robotics is a field that in many ways builds on what we already have in nature.

Facebook Creates Method That May Allow AI Robots To Navigate Without a Map

Facebook has recently created an algorithm that enhances an AI agent’s ability to navigate an environment, letting the agent determine the shortest route through new environments without access to a map. While mobile robots typically have a map programmed into them, the new algorithm that Facebook designed could enable the creation of robots that can navigate environments without the need for maps.

According to a post created by Facebook researchers, a major challenge for robot navigation is endowing AI systems with the ability to navigate through novel environments and reach programmed destinations without a map. To tackle this challenge, Facebook created a reinforcement learning algorithm distributed across multiple learners, called decentralized distributed proximal policy optimization (DD-PPO). DD-PPO was given only compass data, GPS data, and access to an RGB-D camera, yet it was reportedly able to navigate a virtual environment and reach a goal without any map data.
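
The sketch below is a hypothetical illustration of the observation-to-action loop implied by that description, not Facebook's DD-PPO code: the dictionary keys, image shapes, and the simple goal-bearing heuristic are invented placeholders standing in for the trained neural-network policy.

```python
import numpy as np

def make_observation():
    """Fabricated example observation: an RGB-D frame plus GPS offset and compass heading."""
    return {
        "rgb": np.zeros((256, 256, 3), dtype=np.uint8),      # color image
        "depth": np.zeros((256, 256, 1), dtype=np.float32),  # depth map
        "gps": np.array([3.2, -1.4]),                        # metres to the goal (x, y)
        "compass": 0.6,                                      # heading in radians
    }

def policy(obs):
    """Stand-in for the learned policy: turn toward the goal bearing, else move forward."""
    if np.linalg.norm(obs["gps"]) < 0.2:   # close enough to the goal
        return "STOP"
    bearing = np.arctan2(obs["gps"][1], obs["gps"][0]) - obs["compass"]
    if bearing > 0.3:
        return "TURN_LEFT"
    if bearing < -0.3:
        return "TURN_RIGHT"
    return "MOVE_FORWARD"

print(policy(make_observation()))  # TURN_RIGHT for this fabricated observation
```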

According to the researchers, the agents were trained in virtual environments like office buildings and houses. The resulting algorithm was capable of navigating a simulated indoor environment, choosing the correct fork in a path, and quickly recovering from errors if it chose the wrong path. The virtual environment results were promising, and it’s important that the agents are able to reliably navigate these common environments, as in the real world an agent could damage itself or its surroundings if it fails.

The Facebook research team explained that the focus of their project was assistive robots, as proper, reliable navigation for assistive robots and AI agents is essential. Navigation matters for a wide variety of assistive AI systems, from robots that perform tasks around the house to AI-driven devices that help people with visual impairments. The research team also argued that AI creators should move away from map usage in general, as maps are often outdated as soon as they are drawn, and real-world environments are constantly changing and evolving.

As TechExplore reported, the Facebook research team made use of the open-source AI Habitat platform, which enabled them to train embodied agents in photorealistic 3D environments in a timely fashion. Habitat provided access to a set of simulated environments realistic enough that the data generated by the AI model can be applied to real-world cases. Douglas Heaven in MIT Technology Review explained the intensity of the model’s training:

“Facebook trained bots for three days inside AI Habitat, a photorealistic virtual mock-up of the interior of a building, with rooms and corridors and furniture. In that time they took 2.5 billion steps—the equivalent of 80 years of human experience.”

Due to the sheer complexity of the training task, the researchers reportedly culled the weak learners as the training continued in order to speed up training time. The research team hopes to take their current model further and go on to create algorithms that can navigate complex environments using only camera data, dropping the GPS data and compass. The reason for this is that GPS data and compass data can often be thrown off indoors, be too noisy, or just be unavailable.

While the technology has yet to be tested outdoors and has trouble navigating over long distances, the development of the algorithm is an important step toward the next generation of robots, especially delivery drones and robots that operate in offices or homes.

Scientists Repurpose Living Frog Cells to Develop World’s First Living Robot

In what is a remarkable cross between biological life and robotics, a team of scientists has repurposed living frog cells and used them to develop “xenobots.” The cells came from frog embryos, and the xenobots are just a millimeter wide. They are capable of moving toward a target, potentially carrying a payload such as a medicine to a site inside a human body, and healing themselves after being cut or damaged.

“These are novel living machines,” according to Joshua Bongard, a computer scientist and robotics expert at the University of Vermont who co-led the new research. “They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”

The scientists designed the bots on a supercomputer at the University of Vermont, and a group of biologists at Tufts University assembled and tested them. 

“We can imagine many useful applications of these living robots that other machines can’t do,” says co-leader Michael Levin who directs the Center for Regenerative and Developmental Biology at Tufts, “like searching out nasty compounds or radioactive contamination, gathering microplastic in the oceans, traveling in arteries to scrape out plaque.”

The research was published in the Proceedings of the National Academy of Sciences on January 13.

According to the team, this is the first research that “designs completely biological machines from the ground up.”

The work took months of processing time on the Deep Green supercomputer cluster at UVM’s Vermont Advanced Computing Core. The team, which included lead author and doctoral student Sam Kriegman, relied on an evolutionary algorithm to generate thousands of candidate designs for the new life-forms.

When the computer was given a task by the scientists, such as locomotion in one direction, it would continuously reassemble a few hundred simulated cells into different forms and body shapes. As the programs ran, the most successful simulated organisms were kept and refined. The algorithm ran independently a hundred times, and the best designs were picked for testing.
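
The evolutionary search described above can be sketched in heavily simplified form as follows; the body plan, mutation scheme, and fitness function (which merely rewards active cells clustered on one side, standing in for a physics simulation of forward locomotion) are all invented for illustration and are not the UVM team's pipeline.

```python
import random

GRID_CELLS = 25      # cells in a tiny 2D body plan (placeholder size)
POP_SIZE = 50
GENERATIONS = 100

def random_design():
    """Assign each cell a type: 0 = passive (skin-like), 1 = active (muscle-like)."""
    return [random.randint(0, 1) for _ in range(GRID_CELLS)]

def fitness(design):
    """Placeholder score standing in for simulated locomotion distance."""
    half = GRID_CELLS // 2
    return sum(design[:half]) - sum(design[half:])

def mutate(design, rate=0.05):
    """Flip each cell type with a small probability."""
    return [1 - cell if random.random() < rate else cell for cell in design]

population = [random_design() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)        # keep the most successful designs
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print("best placeholder fitness:", max(map(fitness, population)))
```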

The team at Tufts, led by Levin and assisted by microsurgeon Douglas Blackiston, then took the designs to the next stage: life. They gathered stem cells harvested from embryos of the African frog species Xenopus laevis, separated out single cells, and left them to incubate. Using tiny forceps and an electrode, the team cut the cells and joined them under a microscope into the designs created by the computer.

The cells were assembled into all-new body forms, and they began to work together. The skin cells formed a more passive structure, while the heart muscle cells produced the ordered forward motion guided by the computer’s design. The robots were able to move on their own thanks to these spontaneous self-organizing patterns.

The organisms were capable of moving in a coherent way, and they lasted days or weeks exploring their watery environment. They relied on embryonic energy stores, but they failed once flipped over on their backs. 

“It’s a step toward using computer-designed organisms for intelligent drug delivery,” says Bongard, a professor in UVM’s Department of Computer Science and Complex Systems Center.

Since the xenobots are living technologies, they have certain advantages. 

“The downside of living tissue is that it’s weak and it degrades,” says Bongard. “That’s why we use steel. But organisms have 4.5 billion years of practice at regenerating themselves and going on for decades. These xenobots are fully biodegradable,” he continues. “When they’re done with their job after seven days, they’re just dead skin cells.”

These developments will have big implications for the future. 

“If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules,” says Levin. “Much of science is focused on controlling the low-level rules. We also need to understand the high-level rules. If you wanted an anthill with two chimneys instead of one, how do you modify the ants? We’d have no idea.”

“I think it’s an absolute necessity for society going forward to get a better handle on systems where the outcome is very complex. A first step towards doing that is to explore: how do living systems decide what an overall behavior should be and how do we manipulate the pieces to get the behaviors we want?”

“This study is a direct contribution to getting a handle on what people are afraid of, which is unintended consequences, whether in the rapid arrival of self-driving cars, changing gene drives to wipe out whole lineages of viruses, or the many other complex and autonomous systems that will increasingly shape the human experience.”

“There’s all of this innate creativity in life,” says UVM’s Josh Bongard. “We want to understand that more deeply — and how we can direct and push it toward new forms.”
