Researchers in Finland are developing and “training” pieces of plastic to respond to commands from light. This is the first time a synthetic actuator, in this case made of thermoplastic, has “learned” a new action, in this case walking, from its past experiences rather than from computer programming.
The plastics in this project are made from a thermo-responsive liquid crystal polymer network with a coat of dye. They are soft actuators, able to convert energy into mechanical motion. Initially, the actuator responded only to heat, but by associating light with heat it learns to respond to light as well. The actuator is somewhat flexible and bends much as a human bends an index finger. When light is projected onto it and it heats up, it “walks” like an inchworm, moving at about 1 mm/s, roughly the pace of a snail.
Arri Priimägi of Tampere University is the study’s senior author.
“Our research is essentially asking the question if an inanimate material can somehow learn in a very simplistic sense,” he says. “My colleague, Professor Olli Ikkala from Aalto University, posed the question: Can materials learn, and what does it mean if materials would learn? We then joined forces in this research to make robots that would somehow learn new tricks.”
Other members of the research team include postdoctoral researchers Hao Zeng, Tampere University, and Hang Zhang, Aalto University.
The conditioning process that associates light with heat involves letting the dye on the surface diffuse throughout the actuator, which turns it blue. This increases the material’s overall light absorption and, with it, the photothermal effect, so the actuator’s temperature rises and it bends upon irradiation.
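The conditioning logic can be captured in a toy model. The sketch below is purely illustrative (the class, the diffusion increments, and the threshold are invented for the example, not taken from the study): dye diffusion grows while light and heat are applied together, and once enough dye has diffused, light alone triggers bending.

```python
class Actuator:
    """Toy model of light/heat conditioning (illustrative only)."""

    BEND_THRESHOLD = 0.5  # hypothetical diffusion level needed to absorb light

    def __init__(self):
        self.dye_diffusion = 0.0  # 0 = dye on the surface, 1 = fully diffused

    def expose(self, light: bool, heat: bool) -> None:
        # Heating while illuminated lets the dye diffuse into the bulk.
        if light and heat:
            self.dye_diffusion = min(1.0, self.dye_diffusion + 0.2)

    def bends(self, light: bool, heat: bool) -> bool:
        # Heat always bends the actuator (the "unconditioned" response);
        # light alone works only after enough dye has diffused.
        return heat or (light and self.dye_diffusion >= self.BEND_THRESHOLD)


a = Actuator()
print(a.bends(light=True, heat=False))   # False: untrained material ignores light
for _ in range(3):                       # conditioning cycles: light + heat together
    a.expose(light=True, heat=True)
print(a.bends(light=True, heat=False))   # True: light alone now suffices
```

As in Pavlov's setup, the formerly neutral stimulus (light) acquires the response originally tied to the unconditioned stimulus (heat).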
According to Priimägi, the team was inspired by another well-known experiment.
“This study that we did was inspired by Pavlov’s dog experiment,” says Priimägi.
In that famous experiment, a dog salivated in response to seeing food. Pavlov then rang a bell before giving the dog food, and after several repetitions the dog associated the bell with food and began salivating as soon as it heard the bell.
“If you think about our system, heat corresponds to the food, and the light would correspond to the bell in Pavlov’s experiment.”
“Many will say that we are pushing this analogy too far,” says Priimägi. “In some sense, those people are right because compared to biological systems, the material we studied is very simple and limited. But under the right circumstances, the analogy holds.”
The team will now increase the complexity and controllability of the systems, which will help identify the limits of the analogies that can be drawn to biological systems.
“We aim at asking questions which maybe allow us to look at inanimate materials from a new light.”
The systems can do more than walk. They can “recognize” and respond to different wavelengths of light that match their dye coating. This makes the material a tunable soft micro-robot that can be remotely controlled, which is especially useful for biomedical applications.
“I think there’s a lot of cool aspects there. These remotely controlled liquid crystal networks behave like small artificial muscles,” says Priimägi. “I hope and believe there are many ways that they can benefit the biomedical field, among other fields such as photonics, in the future.”
AI Makes it Easier for Drones to Scan and Excavate Terrain
Researchers from Aarhus University (AU) and the Technical University of Denmark (DTU) have collaborated on a project that aims to decrease the costs of measuring and documenting gravel and limestone quarries, while at the same time being faster and easier than the traditional method.
The project included the use of artificial intelligence (AI), which took over the traditionally human-controlled drones that are currently relied on to complete the task.
Erdal Kayacan is an associate professor and expert in artificial intelligence and drones at the Department of Engineering at Aarhus University.
“We’ve made the entire process completely automatic. We tell the drone where to start, and the width of the wall or rock face we want to photograph, and then it flies zig-zag all the way along and lands automatically,” says Kayacan.
Limitations of Human-Controlled Drones
The current method of measuring and documenting gravel and limestone quarries, cliff faces, and other natural and human-made formations relies on drones to photograph the area. A computer then processes the recordings and automatically builds a 3D terrain model.
One downside of this method is that drone pilots are expensive and the measurements are time-consuming. In an excavation, the pilot has to keep the drone at a constant distance from the wall while also keeping the camera perpendicular to the wall, making it a complex and difficult task.
For the computer to convert the images into a 3D model, consecutive images must overlap by a specific amount. Maintaining that overlap is the main process the artificial intelligence automated, and it drastically reduced the complexity of the task.
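The overlap requirement pins down how far apart consecutive photos must be taken. The numbers below (wall distance, field of view, overlap fraction) are illustrative assumptions, not the team's actual flight parameters:

```python
import math

def photo_spacing(distance_m: float, fov_deg: float, overlap: float) -> float:
    """Spacing between camera positions so that consecutive photos of a
    flat wall share the given overlap fraction (0..1)."""
    # Width of wall captured in one photo at this distance and field of view.
    footprint = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    # Each new photo may advance by the non-overlapping part of the footprint.
    return footprint * (1 - overlap)

# Example: 10 m from the wall, 60-degree horizontal FOV, 70% overlap.
# One photo covers ~11.55 m of wall, so shots go ~3.46 m apart.
print(round(photo_spacing(10, 60, 0.70), 2))
```

Keeping the distance to the wall constant, as the algorithm does, is what makes this spacing (and hence the overlap) predictable along the whole zig-zag pass.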
“Our algorithm ensures that the drone always keeps the same distance to the wall and that the camera constantly repositions itself perpendicular to the wall. At the same time, our algorithm predicts the wind forces acting on the drone body,” says Kayacan.
AI Overcomes Wind Problem
The artificial intelligence also helps overcome the wind, which is one of the biggest challenges with autonomous drone flight.
Mohit Mehndiratta is a visiting Ph.D. student in the Department of Engineering at Aarhus University.
“The designed Gaussian process model also predicts the wind to be encountered in the near future. This implies that the drone can get ready and take the corrective actions beforehand,” says Mehndiratta.
When a human pilot is flying this task, even a light breeze can alter the drone’s course. With the new technology, wind gusts and the overall wind speed can be accounted for.
“The drone doesn’t actually measure the wind, it estimates the wind on the basis of input it receives as it moves. This means that the drone responds to the force of the wind, just like when we human beings correct our movements when we are exposed to a strong wind,” says Kayacan.
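The Gaussian-process idea can be sketched in a few lines: fit past wind-force estimates and read off the posterior mean a short time ahead. This is a generic GP regression example with invented numbers, not the team's actual model:

```python
import numpy as np

def rbf(a: np.ndarray, b: np.ndarray, length: float = 1.0, var: float = 1.0) -> np.ndarray:
    """Squared-exponential (RBF) covariance between two sets of times."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Hypothetical wind-force estimates inferred from the drone's recent motion.
t_past = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # seconds
wind = np.array([0.20, 0.35, 0.50, 0.55, 0.60])  # estimated force (arbitrary units)

noise = 1e-4  # small observation-noise term keeps the solve well conditioned
K = rbf(t_past, t_past) + noise * np.eye(len(t_past))

t_next = np.array([2.5])          # "near future": 0.5 s ahead
k_star = rbf(t_next, t_past)

# GP posterior mean at t_next: k_* K^{-1} y
mean = float(k_star @ np.linalg.solve(K, wind))
print(mean)  # a smooth extrapolation of the recent wind trend
```

With a prediction like this in hand, the controller can apply a corrective tilt before the gust arrives instead of reacting after the drone has already drifted.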
The research was completed in collaboration with the Danish Hydrocarbon Research and Technology Centre at DTU, and the results of the project will be presented in May 2020 at the European Control Conference.
Researchers Create Soft Robot Able to Change Shape and Roam
One of the challenges in soft robotics is that most soft robots must be attached to an air compressor or plugged into a wall. Researchers from Stanford set out to overcome this limitation.
Nathan Usevitch is a graduate student in mechanical engineering at Stanford.
“A significant limitation of most soft robots is that they have to be attached to a bulky air compressor or plugged into a wall, which prevents them from moving,” said Usevitch. “So, we wondered: What if we kept the same amount of air within the robot all the time?”
The team was able to develop a human-scale soft robot that is capable of changing its shape. By doing this, the soft robot can latch onto and handle objects, and it is able to roll in controllable directions.
The research was published in Science Robotics on March 18.
“The casual description of this robot that I give to people is Baymax from the movie Big Hero 6 mixed with Transformers. In other words, a soft, human-safe robot mixed with robots that can dramatically change their shape,” said Usevitch.
This soft robot was developed by combining three different types of robots. The simple version of the team’s invention is called an “isoperimetric robot,” since the shape changes while the total length of the edges and the amount of air inside stays the same.
The isoperimetric robot was developed out of soft robots, truss robots, and collective robots. Each category of robotics brought a different advantage: soft robots are lightweight and compliant, truss robots can change shape, and collective robots are small and collaborate.
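The isoperimetric constraint is easy to illustrate: a motor at a joint effectively rolls along the inflated tube, lengthening one edge while shortening its neighbor, so the total perimeter (and the sealed-in air) never changes. The sketch below is a geometric illustration of that idea, not the paper's implementation:

```python
def move_motor(edges: list, joint: int, delta: float) -> list:
    """Shift `delta` units of tube length across the motor at `joint`,
    growing one edge and shrinking the adjacent one by the same amount."""
    new = list(edges)
    new[joint] += delta
    new[(joint + 1) % len(new)] -= delta
    return new

triangle = [1.0, 1.0, 1.0]            # equilateral triangle, perimeter 3.0
deformed = move_motor(triangle, 0, 0.4)

print(deformed)                        # [1.4, 0.6, 1.0]: a new shape
print(sum(deformed), sum(triangle))    # both 3.0: perimeter is conserved
```

Chaining several such triangles, as the team did, gives many degrees of freedom while every individual motion still preserves total edge length and enclosed air.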
Sean Follmer is an assistant professor of mechanical engineering and co-senior author of the paper.
“We’re basically manipulating a soft structure with traditional motors,” said Follmer. “It makes for a really interesting class of robots that combines many of the benefits of soft robots with all of the knowledge we have about more classic robots.”
The team also developed a more complex version of the robot by attaching several triangles together. They were able to coordinate the movements of the different motors, which allowed the robot to carry out desired behaviors, such as picking up a ball.
Elliot Hawkes is an assistant professor of mechanical engineering at the University of California, Santa Barbara and co-senior author of the paper.
“A key understanding we developed was that to create motion with a large, soft pneumatic robot, you don’t actually need to pump air in and out,” said Hawkes. “You can use the air you already have and just move it around with these simple motors; this method is more efficient and lets our robot move much more quickly.”
According to Zachary Hammond, a graduate student in mechanical engineering at Stanford and co-lead author of the paper, one of the possible uses for this soft robot is space exploration.
“This robot could be really useful for space exploration — especially because it can be transported in a small package and then operates untethered after it inflates,” said Hammond. “On another planet, it could use its shape-changing ability to traverse complicated environments, squeezing through tight spaces and spreading over obstacles.”
The researchers are now trying out different shapes, and they want to test the robot in water.
Allison Okamura is a professor of mechanical engineering and co-author of the paper.
“This research highlights the power of thinking about how to design and build robots in new ways,” said Okamura. “The creativity of robot design is expanding with this type of system and that’s something we’d really like to encourage in the robotics field.”
Marcio Macedo, Co-Founder of Ava Robotics – Interview Series
Marcio Macedo is Co-Founder and VP of Product and Marketing at Ava Robotics, a recent spin-off of iRobot that focuses on autonomous navigating robots for enterprise, commercial and industrial environments.
Having previously worked at iRobot, what were some of the interesting projects that you worked on?
At iRobot we were fortunate to be designing and pioneering applications of telepresence, including an FDA-certified telemedicine robot for intensive care environments and the Ava telepresence product in partnership with Cisco.
Ava Robotics is a spinoff of iRobot, what was the inspiration behind launching a new company instead of keeping it in the iRobot family?
With iRobot’s strategic focus shifting to home products, Ava Robotics spun off to operate independently and better address the needs of our nascent markets. As an independent company we gain more flexibility in meeting our customers’ needs while enjoying the support of technology developed originally at iRobot.
The Ava Telepresence robot can be remotely controlled by users and features autonomous technology to have the robot simply move itself to a designated area. Could you walk us through the machine learning that is used to have the robot navigate through an environment without bumping into new objects?
When an Ava is installed at a location it learns its operating environment and creates a realistic topology map of the site. This map can be further annotated to force specific behaviors, such as speed zones, keep-out zones, etc.
Ava has built-in obstacle detection and obstacle avoidance (ODOA) capabilities, which leverage multiple sensors in the robot body so that Ava will not bump into people or objects in its path. Furthermore, if the most direct path to its destination is blocked, the Ava will search for and navigate through an alternative path if one is available.
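The "search for an alternative path" behavior can be illustrated with a standard grid search. Ava's actual planner is not public; the breadth-first search below is a generic stand-in on a small occupancy grid:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = blocked).
    Returns the shortest list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:     # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route exists at all

# The direct corridor from (0,0) to (0,2) is blocked by a wall of obstacles,
# so the search returns a detour down, across, and back up.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (0, 2)))
```

The same principle scales up: when the most direct route is occupied, the planner simply returns the next-shortest free route, which is the detouring behavior described above.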
What are the navigation sensors that are used, is it reliant on LiDAR or regular cameras?
Ava’s robotic navigation technologies use a variety of sensors (3-D cameras, LiDAR, and IMU) and they are combined for all actions, such as localization, planning, collision avoidance, cliff detections, etc. We operate in medium- and large-size spaces, so we think LiDAR is a very valuable part of a sensing package for real-world commercial spaces.
The telepresence robot looks like it would be extremely useful in the hospitality sector. Could you walk us through some of these potential use-cases?
Visits to Executive Briefing Centers provide access to senior-level executives and deliver value in the form of hands-on briefings, strategy reviews, product demonstrations and opportunities for relationship building. Customer Experience Centers offer organizations the opportunity to wow customers and show off their latest products and services. But with so many busy schedules, getting the right people to attend is not always easy.
For meeting planners, Ava provides the ability to “walk” the hotel and visit the meeting spaces, conference rooms and ballrooms that are available for their conference or event. In this application, the property’s sales and marketing team gain a unique tool to accelerate their sales cycles.
When invitees and guests can’t get to the conference or event, Ava allows them to attend and move around as if they were there. Whether it’s a business meeting, conference exhibit hall, or social event, Ava provides an immersive experience with freedom to move around.
What are some of the use-cases that are being targeted in the corporate sector?
Businesses benefit from Ava in many ways. The robot allows freedom of movement and access to meetings, corporate training, factory inspections, manufacturing sites, labs and customer experience settings.
Natural, face-to-face, ad-hoc conversations are critical to moving a business forward. Yet today’s globally distributed businesses have employees telecommuting from home or from across the world, who miss these vital interactions. With Ava, you unlock the ability to bring everyone back together as if they’re sitting together in the office and can walk up and interact naturally.
Use Case examples include:
- Agile Product Development: Agile product development teams come together for scheduled and unscheduled meetings, looking to achieve high levels of collaboration and communication. When remote workers are part of the team, existing collaboration tools are challenged to meet the need. With Ava, remote team members can actively participate in stand-up meetings, sprint planning and demos, and project reviews as if they were co-located with the team.
- Manufacturing: In manufacturing, remote visits by management, collaboration between experts at headquarters and staff at the plant, and remote tours by customers or suppliers are frequent – and necessary – events. Ava increases collaboration between those on the design team or in engineering and those building and delivering the final product on the plant floor. Also, imagine that the manufacturing plant is experiencing a production-line problem, but the person who knows how to fix it is thousands of miles away. In such a case, the technician needs to freely walk to different parts of the manufacturing floor to meet with someone or see something. Ava can help by delivering that critical physical presence right to the factory floor. Ava allows the remote person to immediately connect via the robot as if she were physically present, put eyes on the problem, and communicate with the local team on the floor. As a result, she can deliver immediate insight into the problem and quickly resolve the issue.
- Laboratories and Clean Rooms: Those who work in laboratories and clean rooms work hard to ensure they are kept sterile and clean. While necessary, this can be a time-consuming process for employees entering and leaving these spaces repeatedly during the day. Due to the risks of potential contamination, companies often limit tours by customers and other visitors. Ava brings people right into a laboratory or a clean room without compromising the space. With Ava, remote visitors can easily move around as if they were there in person, observing the work being done and speaking with employees.
Ava Robotics recently partnered with Qatar Airways to Introduce Smart Airport Technologies at QITCOM 2019. Could you share with us some details in regards to this event and how those in attendance reacted?
We have been fortunate to work with Hamad International Airport in Qatar and Qatar Airways via our strategic partner Cisco, building applications for robots in airports for a variety of use cases. Showing our work at QITCOM 2019 was a good opportunity to expose the IT community to the applications that are now possible across different verticals and industries.
Is there anything else that you would like to share about Ava Robotics?
In these times of challenges to global travel, we have seen increased demand for solutions like telepresence robotics. Customers are just beginning to realize the potential of applications that truly empower remote workers to collaborate as if they were physically present at a distant location.