Engineers at MIT are working toward giving robots the ability to follow high-level commands, such as going to another room to retrieve an item for a person. For this to be possible, robots will need to perceive their physical environments much as humans do.
Luca Carlone is an assistant professor of aeronautics and astronautics at MIT.
“In order to make any decision in the world, you need to have a mental model of the environment around you,” Carlone says. “This is something so effortless for humans. But for robots it’s a painfully hard problem, where it’s about transforming pixel values that they see through a camera, into an understanding of the world.”
To take on this challenge, the researchers modeled a representation of spatial perception for robots based on how humans perceive and navigate their physical environments.
3D Dynamic Scene Graphs
The new model is called 3D Dynamic Scene Graphs, and it enables a robot to generate a 3D map of its physical surroundings, including objects and their semantic labels. The robot can also map out people, rooms, walls, and other structures in the environment.
The model then allows the robot to extract information from the 3D map that can be used to locate objects and rooms and to track the movement of people.
“This compressed representation of the environment is useful because it allows our robot to quickly make decisions and plan its path,” Carlone says. “This is not too far from what we do as humans. If you need to plan a path from your home to MIT, you don’t plan every single position you need to take. You just think at the level of streets and landmarks, which helps you plan your route faster.”
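The streets-and-landmarks idea can be sketched as coarse search over a compressed map. The graph below is a hypothetical toy model, not Kimera's actual data structures: rooms are nodes, and the robot plans a route over rooms before refining the path within each one.

```python
from collections import deque

# Hypothetical room-level adjacency graph (illustrative only):
# searching over a handful of rooms is far cheaper than searching
# every vertex of a dense 3D mesh.
ROOMS = {
    "kitchen": ["hallway"],
    "hallway": ["kitchen", "office", "lobby"],
    "office": ["hallway"],
    "lobby": ["hallway"],
}

def coarse_route(graph, start, goal):
    """Breadth-first search over the room layer of a scene graph."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Plan at the level of rooms; a fine-grained planner would then
# only need to search within the rooms on this route.
print(coarse_route(ROOMS, "kitchen", "office"))
```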
According to Carlone, robots that rely on this model would be able to do much more than domestic tasks. They could also take on higher-level work, operating alongside people in factories or helping locate survivors at a disaster site.
Current Methods vs New Model
Current methods for robotic vision and navigation mainly focus on 3D mapping, which allows robots to reconstruct their environment in three dimensions in real time, or on semantic segmentation, in which robots classify features in the environment as semantic objects, such as a car versus a bicycle. Semantic segmentation is typically done on 2D images.
The newly developed model of spatial perception is the first of its kind to generate a 3D map of the environment in real-time and label objects, people, and structures within the 3D map at the same time.
In order to achieve this new model, the researchers relied on Kimera, an open-source library. Kimera was previously developed by the same team to construct a 3D geometric model of an environment, while at the same time encoding what the object likely is, such as a chair versus a desk.
“Like the mythical creature that is a mix of different animals, we wanted Kimera to be a mix of mapping and semantic understanding in 3D,” Carlone says.
Kimera used images from a robot’s camera and inertial measurements from onboard sensors to reconstruct the scene as a 3D mesh in real time. To do this, Kimera ran a neural network trained on millions of real-world images to predict the label of each pixel, then used ray-casting to project those labels into 3D.
Through this technique, the robot’s environment is mapped out as a three-dimensional mesh in which each face is color-coded, identifying it as part of an object, structure, or person in the environment.
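The core step of lifting 2D pixel labels into 3D can be illustrated with a minimal pinhole-camera sketch. This is a simplified stand-in, not Kimera's actual pipeline, which casts rays onto a reconstructed mesh; here each labeled pixel is simply back-projected using its depth value and made-up camera intrinsics.

```python
def backproject_labels(depth, labels, fx, fy, cx, cy):
    """Lift per-pixel semantic labels into 3D camera coordinates.

    depth and labels are H x W nested lists (depth in meters, labels
    as integer class IDs from a segmentation network). Returns a list
    of [x, y, z, label] rows. A toy pinhole-camera model only --
    ray-casting onto a semantic mesh is considerably more involved.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            x = (u - cx) * z / fx   # pinhole back-projection
            y = (v - cy) * z / fy
            points.append([x, y, z, labels[v][u]])
    return points

# Tiny 2x2 frame: uniform 1 m depth, two semantic classes.
depth = [[1.0, 1.0], [1.0, 1.0]]
labels = [[0, 1], [1, 1]]
pts = backproject_labels(depth, labels, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```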
3D Mesh to 3D Dynamic “Scene Graphs”
Because the 3D semantic mesh model requires a lot of computational power and is time-consuming, the researchers used Kimera to develop algorithms that resulted in 3D dynamic “scene graphs.”
The 3D semantic mesh gets broken down into distinct semantic layers, and the robot is then able to view a scene through a layer. The layers go from objects and people, to open spaces and structures, to rooms, corridors, halls, and whole buildings.
This layering lets the robot narrow its focus rather than analyze billions of points and faces, and it also allows the algorithms to track humans and their movement within the environment in real time.
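The layered structure can be sketched as a small hierarchy in which each node points at its parent in the layer above. This is a hypothetical toy schema for illustration, not the actual 3D Dynamic Scene Graph representation: it shows why a query like "which room is the chair in?" never has to touch the raw mesh.

```python
# Toy layered scene graph (illustrative schema): each node maps to its
# parent in the layer above (objects -> rooms -> building).
SCENE_GRAPH = {
    "building": {"B1": None},
    "rooms":    {"office": "B1", "corridor": "B1"},
    "objects":  {"chair": "office", "desk": "office", "person_1": "corridor"},
}

def parent_of(graph, node):
    """Walk up one layer, e.g. find the room containing an object."""
    for layer in graph.values():
        if node in layer:
            return layer[node]
    return None

def nodes_in(graph, layer_name, parent):
    """Query a single layer, ignoring the dense mesh entirely."""
    return [n for n, p in graph[layer_name].items() if p == parent]

print(parent_of(SCENE_GRAPH, "chair"))            # the chair's room
print(nodes_in(SCENE_GRAPH, "objects", "office")) # objects in the office
```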
The new model was tested in a photo-realistic simulator that simulates a robot navigating an office environment with moving people.
“We are essentially enabling robots to have mental models similar to the ones humans use,” Carlone says. “This can impact many applications, including self-driving cars, search and rescue, collaborative manufacturing, and domestic robotics.”
Carlone was joined by lead author and MIT graduate student Antoni Rosinol.
“Our approach has just been made possible thanks to recent advances in deep learning and decades of research on simultaneous localization and mapping,” Rosinol says. “With this work, we are making the leap toward a new era of robotic perception called spatial-AI, which is just in its infancy but has great potential in robotics and large-scale virtual and augmented reality.”
The research was presented at the Robotics: Science and Systems virtual conference.
Researchers Develop Self-Healing Soft Robot Actuators
A team of researchers at Penn State University has developed a solution to the wear on soft robotic actuators due to repeated activity: a self-healing, biosynthetic polymer based on squid ring teeth. The material is beneficial to actuators, but it could also be applied anywhere that tiny holes could cause problems, such as hazmat suits.
According to the report in Nature Materials, “Current self-healing materials have shortcomings that limit their practical application, such as low healing strength and long healing times (hours).”
Drawing inspiration from self-healing creatures in nature, the researchers created high-strength synthetic proteins that can self-heal both minute and visible damage.
Melik Demirel is a professor of engineering science and mechanics and the holder of the Lloyd and Dorothy Foehr Huck Chair in Biomimetic Materials.
“Our goal is to create self-healing programmable materials with unprecedented control over their physical properties using synthetic biology,” he said.
Robotic Arms and Prosthetics
Some robotic machines, such as robotic arms and prosthetic legs, rely on joints that are constantly moving. This requires a soft material, and the same is true for ventilators and various types of personal protective equipment. These materials, and any that undergo continual repetitive motion, are at risk of developing small tears and cracks and eventually breaking. With the use of self-healing material, these tiny tears can be quickly repaired before any serious damage is done.
DNA Tandem Repeats
The team of researchers created the self-healing polymer using a series of DNA tandem repeats, sequences of amino acids produced by gene duplication. Tandem repeats are short series of molecules that can be repeated a virtually unlimited number of times.
Abdon Pena-Francesch is the lead author of the paper and a former doctoral student in Demirel’s lab.
“We were able to reduce a typical 24-hour healing period to one second so our protein-based soft robots can now repair themselves immediately,” Pena-Francesch said. “In nature, self-healing takes a long time. In this sense, our technology outsmarts nature.”
According to Demirel, the self-healing polymer can heal itself with the application of water, heat, and even light.
“If you cut this polymer in half, when it heals it gains back 100 percent of its strength,” Demirel said.
Metin Sitti is director of the Physical Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany.
“Self-repairing physically intelligent soft materials are essential for building robust and fault-tolerant soft robots and actuators in the near future,” Sitti said.
The team created the rapidly healing soft polymer by adjusting the number of tandem repeats. The polymer retains its original strength after healing, and the researchers were also able to make it 100% biodegradable and 100% recyclable into the same polymer.
“We want to minimize the use of petroleum-based polymers for many reasons,” Demirel said. “Sooner or later we will run out of petroleum and it is also polluting and causing global warming. We can’t compete with the really inexpensive plastics. The only way to compete is to supply something the petroleum based polymers can’t deliver and self-healing provides the performance needed.”
According to Demirel, many petroleum-based polymers can be recycled, but only into something different.
The biomimetic polymers biodegrade, and acids such as vinegar can recycle them into a powder that can then be manufactured back into the original self-healing polymer.
Stephanie McElhinny is a biochemistry program manager at the Army Research Office.
“This research illuminates the landscape of material properties that become accessible by going beyond proteins that exist in nature using synthetic biology approaches,” McElhinny said. “The rapid and high-strength self-healing of these synthetic proteins demonstrates the potential of this approach to deliver novel materials for future Army applications, such as personal protective equipment or flexible robots that could maneuver in confined spaces.”
Adam Rodnitzky, COO & Co-Founder of Tangram Robotics – Interview Series
Adam Rodnitzky is the COO & Co-Founder of Tangram Robotics, a company that helps robotics companies integrate sensors quickly and maximize uptime.
What initially attracted you to Robotics?
I’ve always loved mechanical things, and I’ve always loved cutting-edge technology. Robots sit right at the intersection of those two interests. Beyond that foundation of what they are, however, is what they can do. For the longest time, robots were largely relegated to factory settings, where they worked under relatively constrained circumstances. That meant that for most, robots were something they knew about, but never experienced. It’s only been recently that robots have started to play a larger role in society, and that is largely because the technology required to let them operate safely and consistently in the human world is just now becoming viable. The future of robotics is being built as we speak, and the level of interaction between them and humans is going to grow exponentially in the next decade. I’m very excited to witness that.
You were a mentor at StartX, a seed-stage accelerator out of Stanford University, for over a decade. What did you learn from this experience?
Being a company founder comes with a lot of uncertainty, as you face new challenges you’ve never faced, and try to pattern match on prior experience to make sense of the day-to-day realities of running a new company. Looking to mentors for guidance is a natural response to having that uncertainty. But there is a challenge in taking advice from mentors. Mentors will prescribe advice based on their own past experiences. Yet those experiences occurred in different contexts, at different company stages and for different reasons. As a mentor, you’ve got to remember this when giving advice. You may have the best intentions, but you might lead a company astray by not properly contextualizing advice based on past experience. I’ve tried to keep this in mind as I mentor companies at StartX.
You previously worked as a General Manager for Occipital which develops state-of-the-art mobile computer vision applications and hardware. Could you tell us what this role involved in a day to day setting?
When I was at Occipital, our core product was the Structure Sensor and SDK, which made it simple to add 3D sensing to mobile devices, and develop applications to take advantage of that 3D data stream. On a day-to-day basis, I saw my role as combining a short-term tactical and long-term strategic pursuit of revenue and revenue growth. For instance, the SDK was free, and therefore it generated no revenue on a daily basis. However, as developers used the SDK to create apps to use Structure Sensor, there was a direct relationship between the number of apps published on our platform and the rate of sensor sales. So on a daily basis, I’d pursue these indirect revenue opportunities around developer community support, while also setting up programs to sell our sensors in as many channels as possible – including directly through those developers.
When did you first get the idea to launch a robotics startup?
Much of the credit here goes to my co-founder, Brandon Minor. Brandon is a co-founder of Colorado Robotics, and has had his finger on the pulse of the robotics community as long as I have known him. We had both left Occipital independently with the idea of starting companies. Earlier this year, we met up and he proposed that we join forces to build on our past experience with robots, computer vision and sensors. And that is how Tangram Robotics was created.
Could you tell us what Tangram Robotics does?
Tangram Robotics offers sensors-as-a-service to robotics platforms. All robots need perception sensors, but not all of those sensors meet the performance needs of robotics. We infuse trusted hardware with Tangram software that makes integration, calibration, and maintenance a breeze during development and deployment. This means that roboticists don’t need to make any trade-offs; they can start using the best sensors for their platform from day one, and keep that momentum as they deploy.
What are some of the existing challenges companies face when it comes to the integration of Robotic Perception Sensors?
Our interviews with robotics companies of all types have led us to the conclusion that hardware companies make great hardware, but marginal software. The process of developing the right streaming and integration software for a sensor therefore falls to the robotics company itself and can take months to get right. Not only that, but every robotics company is going through this same process, for the same sensors, over and over as they develop their perception stack. This results in a major loss of engineering time and customer revenue. We’ve set up our solution so that it can help robotics companies at any stage, from design through development and ultimately to deployment.
Could you discuss Tangram Robotics web-based diagnostics and monitoring systems?
Tangram understands that the key to improvement is in metrics, both during development and in the field. With that in mind, we are creating remote diagnostics systems that work on top of our integration software that allow robotics developers to better understand what’s happening during operation. This includes data transmission rates, processing time, and metrics directly related to other aspects of our platform. Setting this up over a web portal means that decisions can be made competently without needing the physical presence of an engineer.
One of the solutions Tangram Robotics is working on is developing full-stack tools for robotic companies to add to their project. Could you discuss the vision behind these tools?
Sensor integration is much more than streaming. We look at sensors from a holistic perspective, focusing on the tools needed to develop faster and work longer. This includes competent calibration tools that work in the field, as well as diagnostics and monitoring of data and performance. By solving the base requirements of many robot platforms out-of-the-box, Tangram’s tools dramatically improve time-to-market. We anticipate that various other tools will be requested as our platform matures.
Is there anything else that you would like to share about Tangram Robotics?
As we’ve gone through the process of talking with roboticists, we’ve been blown away at the diversity of applications that robotics companies are pursuing. We’ve spoken to companies building all sorts of wild solutions, from strawberry pickers to sous chefs to boat captains to groundskeepers!
Thank you for the interview. Sensors are often overlooked by companies, and I look forward to following your progress. Readers who wish to learn more should visit Tangram Robotics.
Tiny Robotic Cameras Give First-Person View of Insects
Many people across generations have been curious about the viewpoint of insects and small organisms, a perspective often portrayed in movies. Until now, however, it had never been demonstrated in real life.
Researchers at the University of Washington have created a wireless steerable camera that is capable of being placed on the back of an insect, bringing that viewpoint to the world.
The camera on the back of the insect can stream video to a smartphone at 1 to 5 frames per second, and it is placed on a mechanical arm that allows a 60-degree pivot. The technology provides high-resolution, panoramic shots, as well as the possibility of tracking moving objects.
The entire system weighs around 250 milligrams, and it was demonstrated on the back of live beetles and insect-sized robots.
The work was published on July 15 in Science Robotics.
Shyam Gollakota is the senior author and a UW associate professor in the Paul G. Allen School of Computer Science & Engineering.
“We have created a low-power, low-weight, wireless camera system that can capture a first-person view of what’s happening from an actual live insect or create vision for small robots,” said Gollakota. “Vision is so important for communication and for navigation, but it’s extremely challenging to do it at such a small scale. As a result, prior to our work, wireless vision has not been possible for small robots or insects.”
There are a few reasons why the researchers had to design a new camera rather than use the small ones currently found in smartphones. Smartphone cameras themselves are lightweight, but the batteries they require would make the system too heavy to ride on an insect’s back.
Sawyer Fuller is co-author and a UW assistant professor of mechanical engineering.
“Similar to cameras, vision in animals requires a lot of power,” Fuller said. “It’s less of a big deal in larger creatures like humans, but flies are using 10 to 20% of their resting energy just to power their brains, most of which is devoted to visual processing. To help cut the cost, some flies have a small, high-resolution region of their compound eyes. They turn their heads to steer where they want to see with extra clarity, such as for chasing prey or a mate. This saves power over having high resolution over their entire visual field.”
Modeled After Nature
The newly developed camera was inspired by nature, and the researchers used an ultra-low-power black-and-white camera to mimic an animal’s vision. The camera can move across a field of view with the help of the mechanical arm, which is controlled by the team applying a high voltage, causing the arm to bend and move the camera.
The camera and the arm are able to be controlled via Bluetooth from a smartphone up to 120 meters away.
Testing the Camera
The researchers tested the camera on two different types of beetles, which ended up living for at least a year following the experiment.
“We made sure the beetles could still move properly when they were carrying our system,” said Ali Najafi, co-lead author and UW doctoral student in electrical and computer engineering. “They were able to navigate freely across gravel, up a slope and even climb trees.”
Vikram Iyer is co-lead author and a UW doctoral student in electrical and computer engineering.
“We added a small accelerometer to our system to be able to detect when the beetle moves. Then it only captures images during that time,” Iyer said. “If the camera is just continuously streaming without this accelerometer, we could record one to two hours before the battery died. With the accelerometer, we could record for six hours or more, depending on the beetle’s activity level.”
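The accelerometer-gated capture strategy can be illustrated with a toy energy budget. The per-frame energy cost and budget below are invented numbers for illustration; the paper reports recording times (one to two hours continuous versus six or more hours gated), not these figures.

```python
def capture_session(motion_events, frame_cost_mj=5.0, budget_mj=100.0):
    """Only spend battery on frames while the beetle is actually moving.

    motion_events: one boolean per tick from the accelerometer
                   (True = motion detected this tick).
    frame_cost_mj, budget_mj: made-up illustrative energy figures.
    Returns (frames_captured, energy_spent_mj).
    """
    frames, spent = 0, 0.0
    for moving in motion_events:
        if moving and spent + frame_cost_mj <= budget_mj:
            frames += 1           # capture only while motion is detected
            spent += frame_cost_mj
    return frames, spent

# Beetle moves one tick in three: the other two-thirds of the
# battery budget is never wasted on static frames.
ticks = [i % 3 == 0 for i in range(60)]
frames, spent = capture_session(ticks)
print(frames, spent)
```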
According to the researchers, this technology could be applied in the areas of biology and exploration, and they hope for future versions to be solar-powered. However, the team does recognize certain privacy concerns could arise due to the technology.
“As researchers we strongly believe that it’s really important to put things in the public domain so people are aware of the risks and so people can start coming up with solutions to address them,” Gollakota said.