Robots Walk Faster With Newly Developed Flexible Feet

Roboticists at the University of California San Diego have developed flexible feet for robots. The new feet enable robots to walk 40 percent faster on uneven terrain such as pebbles and wood chips.

The development is important for a variety of applications, especially search-and-rescue missions.

The research will be presented at the RoboSoft conference, which will be held virtually from May 15 to July 15, 2020.

Emily Lathrop is a Ph.D. student at the Jacobs School of Engineering at UC San Diego and the first author of the paper.

“Robots need to be able to walk fast and efficiently on natural, uneven terrain so they can go everywhere humans can go, but maybe shouldn’t,” Lathrop said.

Michael T. Tolley is a professor in the Department of Mechanical and Aerospace Engineering at UC San Diego. He is the senior author of the paper. 

“Usually, robots are only able to control motion at specific joints,” said Tolley. “In this work, we showed that a robot that can control the stiffness, and hence the shape, of its feet outperforms traditional designs and is able to adapt to a wide variety of terrains.”

Flexible Robotic Feet

The flexible robotic feet consist of a latex membrane filled with coffee grounds. The coffee grounds can shift back and forth between behaving like a solid and like a liquid; the mechanism that allows granular media such as coffee grounds to act this way is called granular jamming. As a result, the robots can walk faster and get a better grip.

As the feet touch the ground, they firm up and conform to the surface to establish solid footing. Between steps, the feet unjam and loosen up, and support structures help them stay flexible while jammed.

These flexible feet were the first of their kind to be tested on uneven surfaces. 

The researchers installed the feet on a hexapod robot and designed and built an on-board system that generates positive and negative pressure to jam and unjam the feet between each step. To jam a foot, a vacuum pump removes the air between the coffee grounds; the feet can also be passively jammed when the weight of the robot forces the air out from between the grounds.
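The per-step sequence can be pictured as a simple jam-on-contact, unjam-on-lift cycle. The sketch below illustrates that idea in Python; the pump interface, method names, and timings are hypothetical illustrations, not the authors' actual control code.

```python
import time

class JammingFoot:
    """Hypothetical interface to one granular-jamming foot.

    The real UC San Diego system uses an on-board pump to apply
    positive or negative pressure; these method names and timings
    are illustrative assumptions, not the authors' API.
    """

    def jam(self):
        # Apply vacuum: pull air out so the coffee grounds lock
        # together and the foot behaves like a solid.
        print("vacuum on   -> foot jammed (rigid)")

    def unjam(self):
        # Apply positive pressure: let air back in so the grounds
        # flow and the foot can conform to the terrain again.
        print("pressure on -> foot unjammed (flexible)")


def step_cycle(foot: JammingFoot, stance_time=0.4, swing_time=0.3):
    """One gait cycle: rigid while loaded, compliant while swinging."""
    foot.jam()      # foot on the ground: stiffen for solid footing
    time.sleep(stance_time)
    foot.unjam()    # foot lifted: loosen so it can conform on touchdown
    time.sleep(swing_time)


if __name__ == "__main__":
    foot = JammingFoot()
    for _ in range(3):
        step_cycle(foot)
```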

Uneven Surfaces

The robot was tested walking on a variety of surfaces, including flat ground, wood chips, and pebbles, with and without the flexible feet. The researchers found that passive jamming is most effective on flat ground, while active jamming works best on loose rocks.

With the flexible feet, the robot’s legs were able to grip the ground better, which in turn increased its speed. This was especially true when the robot walked up sloped and uneven terrain.

Nick Gravish is a professor in the UC San Diego Department of Mechanical and Aerospace Engineering and study co-author. 

“The natural world is filled with challenging grounds for walking robots — slippery, rocky, and squishy substrates all make walking complicated,” said Gravish. “Feet that can adapt to these different types of ground can help robots improve mobility.”

The researchers now plan to incorporate soft sensors on the bottoms of the feet, which would feed an electronic control board that identifies the type of ground the robot is about to walk over and whether the feet need to be actively or passively jammed. They will also continue to improve the design and control algorithms for better efficiency.
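As a rough illustration of that planned terrain-aware control, here is a minimal sketch of a mode selector consistent with the reported results (passive jamming on flat ground, active jamming on loose substrates). The terrain labels and the idea of a string-valued classifier output are assumptions for illustration only.

```python
def choose_jamming_mode(terrain: str) -> str:
    """Pick a jamming strategy from a (hypothetical) terrain label.

    Mirrors the reported finding: passive jamming suffices on flat
    ground, while active (pump-driven) jamming helps most on loose
    substrates such as pebbles.
    """
    if terrain == "flat":
        return "passive"   # the robot's own weight squeezes the air out
    if terrain in ("pebbles", "wood_chips", "slope"):
        return "active"    # vacuum pump jams the foot on each step
    return "active"        # default to the more conservative mode


if __name__ == "__main__":
    for surface in ["flat", "pebbles", "wood_chips"]:
        print(surface, "->", choose_jamming_mode(surface))
```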

 

Human Brain’s Light Processing Ability Could Lead to Better Robotic Sensing

The human brain often serves as inspiration for artificial intelligence (AI), and that is the case once again: a team of Army researchers has improved robotic sensing by studying how the human brain processes bright and contrasting light. The development could help pave the way for collaboration between autonomous agents and humans.

According to the researchers, machine sensing needs to remain effective across changing environments for autonomy to advance.

The research was published in the Journal of Vision.

100,000-to-1 Display Capability

Andre Harrison is a researcher at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. 

“When we develop machine learning algorithms, real-world images are usually compressed to a narrower range, as a cellphone camera does, in a process called tone mapping,” Harrison said. “This can contribute to the brittleness of machine vision algorithms because they are based on artificial images that don’t quite match the patterns we see in the real world.” 

The team of researchers developed a system with 100,000-to-1 display capability, which enabled them to gain insight into the brain’s computing process in the real world. According to Harrison, this allowed the team to build biological resilience into sensors.

Current vision algorithms still have a long way to go. They are based on human and animal studies performed with standard computer monitors, which limits them to a luminance range of around 100-to-1. That ratio falls well short of the real world, where the variation can reach 100,000-to-1, a range termed high dynamic range, or HDR.
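To make the tone-mapping point concrete, the snippet below applies a classic Reinhard-style global operator, L / (1 + L), to a toy scene spanning a 100,000-to-1 luminance range. This is a standard textbook curve chosen for illustration; it is not the specific compression studied by the Army team.

```python
import numpy as np

def reinhard_tone_map(luminance: np.ndarray) -> np.ndarray:
    """Compress scene luminance into [0, 1) with L / (1 + L).

    A classic global tone-mapping curve: bright regions are squeezed
    hard while dark regions stay nearly linear, which is roughly how
    an HDR scene gets fit onto a low-dynamic-range display.
    """
    return luminance / (1.0 + luminance)


if __name__ == "__main__":
    # Toy scene spanning a 100,000-to-1 luminance range (arbitrary units).
    scene = np.array([0.001, 0.01, 1.0, 10.0, 100.0])
    mapped = reinhard_tone_map(scene)
    print("input range :", scene.max() / scene.min())    # 100000.0
    print("output range:", mapped.max() / mapped.min())  # roughly 990
```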

Dr. Chou Po Hung is an Army researcher. 

“Changes and significant variations in light can challenge Army systems — drones flying under a forest canopy could be confused by reflectance changes when wind blows through the leaves, or autonomous vehicles driving on rough terrain might not recognize potholes or other obstacles because the lighting conditions are slightly different from those on which their vision algorithms were trained,” Hung said.

The Human Brain’s Compressing Capability

The human brain automatically compresses this 100,000-to-1 input into a narrower range, which is what allows humans to interpret shape. The team of researchers set out to understand this process by studying early visual processing under HDR, focusing on simple features such as HDR luminance.

“The brain has more than 30 visual areas, and we still have only a rudimentary understanding of how these areas process the eye’s image into an understanding of 3D shape,” Hung continued. “Our results with HDR luminance studies, based on human behavior and scalp recordings, show just how little we truly know about how to bridge the gap from laboratory to real-world environments. But, these findings break us out of that box, showing that our previous assumptions from standard computer monitors have limited ability to generalize to the real world, and they reveal principles that can guide our modeling toward the correct mechanisms.” 

By discovering how light and contrast edges interact in the brain’s visual representation, algorithms can be made more effective at reconstructing the 3D world under real-world luminance. Estimating 3D shape from 2D information always involves ambiguities, but this discovery helps correct for them.

“Through millions of years of evolution, our brains have evolved effective shortcuts for reconstructing 3D from 2D information,” Hung said. “It’s a decades-old problem that continues to challenge machine vision scientists, even with the recent advances in AI.”

The team’s discovery is also important for the development of AI devices like radar and remote speech understanding, which rely on wide dynamic range sensing.

“The issue of dynamic range is not just a sensing problem,” Hung said. “It may also be a more general problem in brain computation because individual neurons have tens of thousands of inputs. How do you build algorithms and architectures that can listen to the right inputs across different contexts? We hope that, by working on this problem at a sensory level, we can confirm that we are on the right track, so that we can have the right tools when we build more complex AIs.”

Researchers Develop First Microscopic Robots Capable of “Walking”

Image: Cornell University

In a breakthrough for the field of robotics, researchers have created the first microscopic robots that can be controlled through their incorporated semiconductor components. The robots are able to “walk” using only standard electronic signals.

The microscopic robots are the size of a paramecium, and they will act as the foundation for further projects, including more complex versions with silicon-based intelligence, mass production of such robots, and versions capable of moving through human tissue and blood.

The work was a collaboration led by Cornell University, which included Itai Cohen, professor of physics. Other members of the team included Paul McEuen, the John A. Newman Professor of Physical Science, as well as Marc Miskin, assistant professor at the University of Pennsylvania.

Their work was published Aug. 26 in Nature, titled “Electronically Integrated, Mass-Manufactured, Microscopic Robots.”

Previous Nanoscale Projects

The newly developed microscopic robots were built upon previous work done by Cohen and McEuen. Some of their previous nanoscale projects involved microscopic sensors and graphene-based origami machines. 

The new microscopic robots are approximately 5 microns thick, 40 microns wide, and between 40 and 70 microns long. One micron is one-millionth of a meter.

Each robot has a simple circuit that is made from silicon photovoltaics and four electrochemical actuators. The silicon photovoltaics act as the torso and brain, while the electrochemical actuators act as the legs.

Controlling the Microscopic Robots

In order to control the robots, the researchers flash laser pulses at different photovoltaics, with each one powering a separate set of legs. The robots walk when the laser is toggled back and forth between the front and back photovoltaics.
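A toy sketch of that alternating drive pattern is shown below; the function and group names are hypothetical stand-ins, since the real system steers an actual laser rather than calling software functions.

```python
import itertools
import time

def flash(photovoltaic_group: str):
    """Stand-in for aiming a laser pulse at one photovoltaic set.

    In the real experiment the laser powers the silicon photovoltaics,
    which in turn drive the electrochemical leg actuators; the print
    statement here is only a placeholder.
    """
    print(f"laser pulse -> {photovoltaic_group} legs actuate")


if __name__ == "__main__":
    # Toggling between front and back photovoltaics produces the gait.
    for group in itertools.islice(itertools.cycle(["front", "back"]), 6):
        flash(group)
        time.sleep(0.1)
```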

The robots operate at a voltage of just 200 millivolts and run on only 10 nanowatts of power. They are remarkably robust for such small objects, and because they are built with standard lithographic processes, they can be fabricated in parallel: a single four-inch silicon wafer can hold around 1 million of them.
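Some back-of-the-envelope arithmetic is consistent with those figures; the sketch below computes the implied current draw and an ideal-packing upper bound for a four-inch wafer (the zero-spacing assumption is ours, not from the paper).

```python
import math

# Implied current draw from the quoted operating point.
power = 10e-9       # 10 nanowatts
voltage = 200e-3    # 200 millivolts
current = power / voltage
print(f"current ~ {current * 1e9:.0f} nA")            # about 50 nA

# Rough packing check for "around 1 million robots on a 4-inch wafer".
wafer_radius_m = 0.0508                                # 4-inch wafer radius
wafer_area = math.pi * wafer_radius_m ** 2             # ~8.1e-3 m^2
robot_area = 40e-6 * 70e-6                             # 40 um x 70 um footprint
upper_bound = wafer_area / robot_area
print(f"ideal packing ~ {upper_bound / 1e6:.1f} million robots")
# Roughly 2.9 million with zero spacing, so ~1 million with real
# spacing and process margins is plausible.
```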

The team is now looking at how to make the robots more powerful through electronics and onboard computation. 

It is possible that future versions of the microrobots could act in swarms to complete tasks like restructuring materials or suturing blood vessels, or could even be sent into the human brain.

“Controlling a tiny robot is maybe as close as you can come to shrinking yourself down. I think machines like these are going to take us into all kinds of amazing worlds that are too small to see,” said Miskin.

“This research breakthrough provides an exciting scientific opportunity for investigating new questions relevant to the physics of active matter and may ultimately lead to futuristic robotic materials,” said Sam Stanton.

Stanton is program manager for the Army Research Office, which supported the microscopic robot research. 

A video of Itai Cohen explaining the technology can be found here.

Jorgen Pedersen, President and CEO, RE2 Robotics – Interview Series

Jorgen Pedersen is President and CEO of RE2 Robotics, a leading developer of intelligent mobile manipulation systems. The company is committed to creating manipulator arms with human-like performance, intuitive robot interfaces, and advanced autonomy capabilities for use in any environment. Utilizing artificial intelligence, computer vision, and machine learning, its robotic systems can operate with humans in the loop or autonomously, depending on the application.

What initially attracted you to robotics?

When I was in high school, I thought my path in life was going to be art. Then along came movies like “Top Gun” and “The Right Stuff,” which motivated me to want to first become a pilot, and then an astronaut. With my sights set, I focused more on math and science and applied to the Air Force Academy. I didn’t get in. It was a blow, but being driven by “cool factor” at age 18, I soon came up with the idea that making a humanoid robot go to space was the next best thing. So I applied to engineering schools and was accepted to Carnegie Mellon University. Once there, I soon found the Robotics Institute. Within its walls, I saw some amazing robots of the time: “Ambler,” a really huge robot that was going to walk around Mars; Dante, a robot that was going to walk into a volcano; Navlab, one of the first robots to drive across America autonomously; and many more. I was hooked at that point. I knew what I wanted to do – build robots!

Before launching RE2 Robotics you were a member of the National Robotics Engineering Center (NREC), an operating unit within Carnegie Mellon University’s Robotics Institute (RI), the world’s largest robotics research and development organization. Could you discuss the experience of working there at such a pivotal time in robotics history?

It was an honor to be one of the original twelve people who opened the NREC’s doors and began the commercialization of robotics beyond the factory floor.  I was exposed to real-world problems, working with customers such as Caterpillar, New Holland (now Case New Holland), Joy Mining, and others to apply the fundamental robotics research coming out of the university and create proof that the world is ready for robotics.  It was up to us to dispel the skeptics and generate a cultural shift toward the perception of robotics beyond manufacturing and material handling.  There was something special about those early years and the people who launched the NREC into what it is today.  Yes, the early members were smart and motivated, but there was a level of fortitude that is rarely as pervasive in an organization as was seen then.  There was a contagious attitude of perseverance and resourcefulness that drove us to redesign the hardware, relook at the code, or improvise based on the limitations of the technology of that day.

When did the idea of launching RE2 Robotics originate?

I left the NREC in 2000 to join a robotics start-up company that was focused on industrial floor cleaning using robots. Although I learned a lot about commercializing robots, the company struggled and I decided to set my sights elsewhere, though I was not sure where. While I was figuring out what to do next, I decided to consult back to the NREC. In order to do this, I formed a company called “Robotics Engineering Excellence,” or “RE2” for short, which incubated at the NREC for its first five years. Little did I know that the “temporary” robotics engineering company formed in 2001 would turn into a leading provider of intelligent mobile manipulation solutions.

RE2 Robotics chose to focus exclusively on robotic arms instead of robots in general such as competitor Boston Dynamics. Why was this chosen as the core focus?

For the first five years of our existence, RE2 was truly a contract engineering firm that served the NREC, solving many hard problems, including the DARPA Perception for Off-Road Mobility program. Beyond overcoming technical challenges, I knew that I wanted to see the robots that we were developing used in the real world. With that, we won our first Small Business Innovation Research (SBIR) grant with the Department of Defense. The topic was focused on making small, lightweight modular robotic arms for the Unmanned Ground Vehicles (UGVs) being used in Iraq and Afghanistan for remotely defusing IEDs. The concept of saving lives through the use of robotics was appealing to me. Additionally, there was an urgent need for this technology, meaning that I knew that we would be able to field our technology in the near term. Finally, this initial SBIR program revealed a deficiency in the market – there were no strong mobile manipulation offerings. Most mobile robotics companies were focused on moving through and perceiving the world. Few were focused on physically interacting with the world. Why? Manipulating the physical world is difficult. Robotic arms are typically an order of magnitude more complex than the mobile platforms they are mounted on. Over time, we have advanced the physical designs to near-human capability. Today, we continue to advance the Artificial Intelligence (AI) and computer vision algorithms that allow for automation nearly anywhere on the planet.

One of the RE2 Robotics dual-arm systems is being used for cleanup at the Fukushima Dai-Ichi Nuclear Power Station in Japan. Could you discuss some of the unique challenges of designing a robot for this type of environment?

We actually never designed a robot for radiological environments. We designed a human-like robotic system called the Highly Dexterous Manipulation System (HDMS), which was intended to be placed on small UGVs used by Explosive Ordnance Disposal (EOD) teams to defuse threats like IEDs. The military requirements, however, dictated that HDMS needed to be able to withstand major temperature fluctuations, deal with inclement weather, handle shock and vibration, etc. It turns out that the strict requirements imposed by the U.S. Army created a system that has been able to hold up even at Fukushima. The system has been in operation for over a year and a half now.

RE2 Robotics designs impressive robots that can be used in the ocean to defuse Waterborne Improvised Explosive Devices (WBIEDs) and mines. Outside of the corrosive effects of extended saltwater exposure, what are some of the other challenges that roboticists need to overcome?

Our Maritime Dexterous Manipulation System (MDMS) is designed to operate in both shallow and deep ocean waters. Designing an electro-mechanical solution that works in deeper waters requires special engineering designs for dealing with the incredible pressures seen more than 100 meters below sea level. Additionally, since our arms are mounted on Unmanned Underwater Vehicles (UUVs), we had to consider the dynamic impacts that the arms impart on the UUV while both swimming and physically interacting with the environment. As a result, MDMS is neutrally buoyant, meaning that the arms are weightless in water.
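For a sense of scale, the hydrostatic relation P = ρgh gives the pressure a housing must withstand at depth. The sketch below uses generic seawater values for illustration; the numbers are not RE2 design specifications.

```python
# Hydrostatic (gauge) pressure at depth: P = rho * g * h,
# i.e. pressure above atmospheric. Illustrative values only.
rho_seawater = 1025.0   # kg/m^3, typical seawater density
g = 9.81                # m/s^2

for depth_m in (100, 1000, 3000):
    pressure_pa = rho_seawater * g * depth_m
    print(f"{depth_m:>5} m -> {pressure_pa / 1e6:5.1f} MPa "
          f"(~{pressure_pa / 101325:.0f} atm above surface)")
```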

Out of all the different types of robotic arms that have been designed by RE2 Robotics which one do you personally find the most impressive?

This is a difficult question for me to answer. The fact that MDMS provides human performance in deep ocean water is very impressive, but the robotic arm we first developed for the Navy’s ground-based Advanced EOD Robotic System (AEODRS) program 10 years ago was most impressive to me. This arm had four degrees of freedom, a gripper, and an embedded manipulator controller, all packed into a 4-pound, weather-proof, power-efficient solution that could lift more than its own weight at full extension and move that weight at 120 degrees per second. This was an engineering feat that moved the needle of what is possible. As a result, since proving the concept with that first Navy arm, RE2’s robotic arms continue to feature incredible strength-to-weight ratios in a rugged package with built-in intelligence.

Currently most of your market involves military applications such as defusing mines. What are some other vertical markets that RE2 Robotics will be penetrating?

In 2018, RE2 broadened its reach and applied its intelligent mobile manipulation to other vertical markets, including aviation and medical. Today, RE2 is 70% commercial and 30% defense, even though defense revenue has also grown since 2018. Although we cannot publicly disclose the specifics of our commercial work currently, at a high level, RE2 is applying its manipulation expertise for use in surgical robots as well as for the maintenance of aircraft. RE2 is continuing to evaluate markets that need to automate in areas where only humans could previously go, to increase throughput, to keep humans out of harm’s way, to improve quality, or to serve as a force multiplier.

Is there anything else that you would like to share about RE2 Robotics?

For nearly 20 years, RE2 has pushed the envelope of what is physically possible regarding robotic arms. RE2’s arms have been designed from the ground up to exhibit near-human performance in a compact form that can go where humans go. Today, by applying computer vision, machine learning and artificial intelligence to our technology, we continue to advance the boundaries of what is possible with mobile manipulation. Our computer-vision module, RE2 Detect, gives our systems the capability to perceive the world. RE2 Intellect, our AI-driven autonomy module, enables our systems to reason and interpret what they see, allowing them to autonomously manipulate objects in both indoor and outdoor environments.

Thank you for the great interview; I look forward to following your progress. Readers who wish to learn more should visit RE2 Robotics.
