Researchers in Professor Jamie Paik’s laboratory at École Polytechnique Fédérale de Lausanne have developed ant-like robots that bring a whole new dimension to AI. These 10-gram robot ants have little physical intelligence as individuals, but when grouped together they can communicate and act as a collective unit. They have several locomotion modes, and each one can navigate on any type of surface. As a group, they can move objects many times their own weight, much as a group of ants carries a stick. As individuals, they act completely autonomously and remain disconnected from one another. Each ant robot carries infrared and proximity sensors used to detect objects and communicate with the others, and there is the possibility of adding more, and different, sensor types in the future.
These small, T-shaped, three-legged ant robots are called Tribots. Because of their small size and simple construction, they are suitable for mass production: each one is assembled from thin, multi-material sheets folded into a stack. Modeled on real Odontomachus trap-jaw ants, which use their spring-loaded jaws to fling themselves between leaves, each robot has five locomotion modes: vertical jumping, horizontal jumping, somersaulting, walking on textured terrain, and moving on flat surfaces.
When operating as a group, these robotic ants take on distinct individual roles: the explorer, the leader, and the worker. Explorers scout for physical obstacles ahead, leaders dictate the actions of the group, and workers move objects. The ants are not tied to a single role; they can switch roles instantly as conditions change.
This type of technology could be used in real-life scenarios such as emergency search missions, as well as in the health sector. In theory, such robots could enter the bloodstream to detect problems and carry medicine to those precise problem areas. Because they are relatively easy to mass-produce, they can be deployed in large numbers.
Another benefit is that they can detect targets without using any type of GPS. Professor Jamie Paik spoke about the possibilities of this technology.
“With their unique collective intelligence, our tiny robots can demonstrate better adaptability to unknown environments; therefore, for certain missions, they would outperform larger, more powerful robots.”
These robot ants are part of a new development within the AI world called swarm intelligence; think of ants, bees, wasps, and other organisms that can work both autonomously and as a collective group. Such robots will also be able to operate in our environment alongside humans.
They are made up of sensors, software, and connectivity components that allow them to move physically, run algorithms that make intelligent decisions, and communicate with each other. This is a significant development in AI: the robots can collect information as they interact with the environment and one another, which will continue to improve them and make them more useful in infrastructure, products, and services.
These robot swarms work toward a shared common goal while remaining autonomous. They are largely self-sustaining in the sense that they can self-deploy, self-repair, and self-optimize. As a swarm, they can spread the work among themselves, which allows for greater efficiency and fewer communication disruptions.
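As a rough illustration of how a swarm might spread work among its members, here is a hedged sketch (not the lab's actual algorithm) that greedily hands each task to the nearest free robot:

```python
import math

def assign_tasks(robots, tasks):
    """Greedily assign each task to the nearest unassigned robot.

    robots, tasks: lists of (x, y) positions. Returns a dict mapping
    task index -> robot index. Purely illustrative of how work can be
    spread across a swarm without heavy central coordination.
    """
    free = set(range(len(robots)))
    assignment = {}
    for t_idx, (tx, ty) in enumerate(tasks):
        if not free:
            break  # more tasks than robots; remaining tasks wait
        best = min(free, key=lambda r: math.hypot(robots[r][0] - tx,
                                                  robots[r][1] - ty))
        assignment[t_idx] = best
        free.remove(best)
    return assignment

print(assign_tasks([(0, 0), (5, 5)], [(4, 4), (1, 0)]))  # {0: 1, 1: 0}
```

In a real swarm each robot would make this decision locally from what it senses, rather than a central planner computing the assignment.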
Just like any AI, these robot ants need restrictions. There will have to be a system of overrides and human intervention in case they do not follow instructions. They will also be vulnerable to privacy threats, and with the ever-increasing interconnectivity of machines and AI, that is a serious problem; regulations and privacy controls will need to be established.
This new technology is just one facet of the rapid development taking place within the AI field, and it will have a significant impact on what AI can be used for.
AI Makes it Easier for Drones to Scan and Excavate Terrain
Researchers from Aarhus University (AU) and the Technical University of Denmark (DTU) have collaborated on a project that aims to decrease the costs of measuring and documenting gravel and limestone quarries, while at the same time being faster and easier than the traditional method.
The project included the use of artificial intelligence (AI), which took over the traditionally human-controlled drones that are currently relied on to complete the task.
Erdal Kayacan is an associate professor and expert in artificial intelligence and drones at the Department of Engineering at Aarhus University.
“We’ve made the entire process completely automatic. We tell the drone where to start, and the width of the wall or rock face we want to photograph, and then it flies zig-zag all the way along and lands automatically,” says Kayacan.
Limitations of Human-Controlled Drones
The current method of measuring and documenting gravel and limestone quarries, cliff faces, and other natural and human-made formations relies on drones to photograph the area. A computer then processes the recordings and automatically generates a 3D terrain model.
One downside of this method is that drone pilots are expensive and the measurements are time-consuming. In an excavation, the pilot has to keep the drone at a constant distance from the wall while keeping the camera perpendicular to it, making this a complex and difficult task.
For the computer to convert the images into a 3D model, there has to be a specific overlap between them. This is the main process that was automated with artificial intelligence, and it drastically reduced the complexity of the task.
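To see why the required image overlap constrains the flight plan, here is a hedged back-of-the-envelope sketch (illustrative numbers, not the project's actual parameters) of how far apart consecutive zig-zag passes can be for a given overlap:

```python
import math

def line_spacing(fov_deg, distance_m, overlap):
    """Spacing between zig-zag passes for a target image overlap.

    The camera footprint on the wall is 2 * d * tan(fov / 2); to
    overlap consecutive images by `overlap` (e.g. 0.7 for 70 %), each
    new pass may only advance by the non-overlapping fraction of that
    footprint. All parameter values here are made up for illustration.
    """
    footprint = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    return footprint * (1 - overlap)

# e.g. a 60-degree field of view at 10 m standoff with 70 % overlap
print(round(line_spacing(60, 10.0, 0.7), 2))  # 3.46 (metres)
```

The closer the required overlap is to 100 %, the tighter the passes, which is why automating this geometry removes so much of the pilot's burden.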
“Our algorithm ensures that the drone always keeps the same distance to the wall and that the camera constantly repositions itself perpendicular to the wall. At the same time, our algorithm predicts the wind forces acting on the drone body,” says Kayacan.
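The distance-keeping and camera-alignment behavior Kayacan describes can be sketched as a simple proportional controller. This is only an illustrative sketch with made-up gains and standoff distance, not the project's actual control law:

```python
def wall_follow_step(distance, heading_err, target=10.0,
                     kp_dist=0.5, kp_yaw=1.2):
    """One control step keeping a drone at a fixed standoff from a
    wall with its camera perpendicular to it.

    distance:    measured distance to the wall (m)
    heading_err: angle between camera axis and wall normal (rad)
    Returns (lateral_velocity, yaw_rate) commands. The gains and the
    10 m target standoff are hypothetical illustration values.
    """
    lateral_v = kp_dist * (target - distance)  # drift back to standoff
    yaw_rate = -kp_yaw * heading_err           # rotate back to perpendicular
    return lateral_v, yaw_rate

# 1 m too close to the wall and tilted 0.1 rad off-perpendicular
print(wall_follow_step(9.0, 0.1))
```

A real controller would also feed in the predicted wind force as a feedforward term, which is exactly the piece the researchers' algorithm adds.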
AI Overcomes Wind Problem
The artificial intelligence also helps overcome the wind, which is one of the biggest challenges with autonomous drone flight.
Mohit Mehndiratta is a visiting Ph.D. student in the Department of Engineering at Aarhus University.
“The designed Gaussian process model also predicts the wind to be encountered in the near future. This implies that the drone can get ready and take the corrective actions beforehand,” says Mehndiratta.
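A Gaussian process regressor of the kind Mehndiratta mentions can be written in a few lines. The sketch below is a deliberately minimal version, with made-up wind samples and hyperparameters, far simpler than the researchers' model, but it shows the same idea of extrapolating recent wind measurements a short distance into the future:

```python
import numpy as np

def gp_predict(t_train, y_train, t_query, length=2.0, noise=0.1):
    """Gaussian-process regression with an RBF kernel.

    Fits past wind measurements y_train at times t_train and returns
    the posterior mean at t_query, so the controller can correct for
    wind it has not yet felt. Hyperparameters are illustrative.
    """
    def rbf(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

    K = rbf(t_train, t_train) + noise ** 2 * np.eye(len(t_train))
    k_star = rbf(t_query, t_train)
    return k_star @ np.linalg.solve(K, y_train)

t = np.array([0.0, 1.0, 2.0, 3.0])
wind = np.array([1.0, 1.2, 1.1, 1.3])        # m/s, made-up samples
print(gp_predict(t, wind, np.array([4.0])))  # predicted wind one step ahead
```

Because a GP also provides an uncertainty estimate (omitted here), the drone can weight its corrective action by how confident the prediction is.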
When a human-controlled drone is completing this task, even a light breeze can alter its course. With the new technology, wind gusts and the overall wind speed can be accounted for.
“The drone doesn’t actually measure the wind, it estimates the wind on the basis of input it receives as it moves. This means that the drone responds to the force of the wind, just like when we human beings correct our movements when we are exposed to a strong wind,” says Kayacan.
The research was completed in collaboration with the Danish Hydrocarbon Research and Technology Centre at DTU, and the results of the project will be presented in May 2020 at the European Control Conference.
Researchers Create Soft Robot Able to Change Shape and Roam
One of the challenges surrounding soft robotics is that most of them are required to be attached to an air compressor or plugged into a wall. Researchers from Stanford set out to overcome this challenge.
Nathan Usevitch is a graduate student in mechanical engineering at Stanford.
“A significant limitation of most soft robots is that they have to be attached to a bulky air compressor or plugged into a wall, which prevents them from moving,” said Usevitch. “So, we wondered: What if we kept the same amount of air within the robot all the time?”
The team was able to develop a human-scale soft robot that is capable of changing its shape. By doing this, the soft robot can latch onto and handle objects, and it is able to roll in controllable directions.
The research was published in Science Robotics on March 18.
“The casual description of this robot that I give to people is Baymax from the movie Big Hero 6 mixed with Transformers. In other words, a soft, human-safe robot mixed with robots that can dramatically change their shape,” said Usevitch.
This soft robot was developed by combining three different types of robots. The simplest version of the team’s invention is called an “isoperimetric robot,” since the shape changes while the total length of the edges and the amount of air inside stay the same.
The isoperimetric robot was developed out of soft robots, truss robots, and collective robots. Each category of robotics brought a different advantage: soft robots are lightweight and compliant, truss robots can change shape, and collective robots are small and collaborate.
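A toy analogue makes the isoperimetric idea concrete: a rhombus whose four edges have fixed length can still change shape, and therefore enclosed area, by varying its joint angle, just as the robot reshapes itself while its total edge length and air volume stay constant. This sketch is purely illustrative, not the team's geometry:

```python
import math

def rhombus_metrics(side, angle_deg):
    """Perimeter and area of a rhombus with given side and interior angle.

    Driving the angle changes the enclosed area (the shape) while the
    total edge length -- the isoperimetric quantity -- is conserved.
    """
    perimeter = 4 * side
    area = side ** 2 * math.sin(math.radians(angle_deg))
    return perimeter, area

for angle in (90, 60, 30):
    p, a = rhombus_metrics(1.0, angle)
    print(f"angle={angle:3d}  perimeter={p:.2f}  area={a:.3f}")
```

The perimeter stays at 4.00 while the area shrinks as the rhombus flattens, which is why small motors redistributing a fixed structure can produce large shape changes.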
Sean Follmer is an assistant professor of mechanical engineering and co-senior author of the paper.
“We’re basically manipulating a soft structure with traditional motors,” said Follmer. “It makes for a really interesting class of robots that combines many of the benefits of soft robots with all of the knowledge we have about more classic robots.”
The team also developed a more complex version of the robot by attaching several triangles together. They were able to coordinate the movements of the different motors, which allowed the robot to carry out desired behaviors, such as picking up a ball.
Elliot Hawkes is an assistant professor of mechanical engineering at the University of California, Santa Barbara and co-senior author of the paper.
“A key understanding we developed was that to create motion with a large, soft pneumatic robot, you don’t actually need to pump air in and out,” said Hawkes. “You can use the air you already have and just move it around with these simple motors; this method is more efficient and lets our robot move much more quickly.”
According to Zachary Hammond, a graduate student in mechanical engineering at Stanford and co-lead author of the paper, one of the possible uses for this soft robot is space exploration.
“This robot could be really useful for space exploration — especially because it can be transported in a small package and then operates untethered after it inflates,” said Hammond. “On another planet, it could use its shape-changing ability to traverse complicated environments, squeezing through tight spaces and spreading over obstacles.”
The researchers are now trying out different shapes, and they want to test the robot in water.
Allison Okamura is a professor of mechanical engineering and co-author of the paper.
“This research highlights the power of thinking about how to design and build robots in new ways,” said Okamura. “The creativity of robot design is expanding with this type of system and that’s something we’d really like to encourage in the robotics field.”
Marcio Macedo, Co-Founder of Ava Robotics – Interview Series
Marcio Macedo is Co-Founder and VP of Product and Marketing at Ava Robotics, a recent spin-off of iRobot that focuses on autonomous navigating robots for enterprise, commercial and industrial environments.
Having previously worked at iRobot, what were some of the interesting projects that you worked on?
At iRobot we were fortunate to be designing and pioneering applications of telepresence, including an FDA-certified telemedicine robot for intensive care environments and the Ava telepresence product in partnership with Cisco.
Ava Robotics is a spinoff of iRobot, what was the inspiration behind launching a new company instead of keeping it in the iRobot family?
With iRobot’s strategic focus shifting to home products, Ava Robotics spun off to operate independently and better address the needs of our nascent markets. As an independent company we gain more flexibility in meeting our customers’ needs while enjoying the support of technology developed originally at iRobot.
The Ava Telepresence robot can be remotely controlled by users and features autonomous technology to have the robot simply move itself to a designated area. Could you walk us through the machine learning that is used to have the robot navigate through an environment without bumping into new objects?
When an Ava is installed at a location it learns its operating environment and creates a realistic topology map of the site. This map can be further annotated to force specific behaviors, such as speed zones, keep-out zones, etc.
Ava has built-in obstacle detection and obstacle avoidance (ODOA) capabilities, which leverage multiple sensors in the robot body so that Ava will not bump into people or objects in its path. Furthermore, if the most direct path to its destination is blocked, the Ava will search for and navigate through an alternative path if one is available.
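The re-routing behavior described above can be sketched with a simple grid search. This is a much-simplified illustration, not Ava's actual planner: when the direct route is blocked, a breadth-first search still finds an alternative path if one exists.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 4-connected grid; 1 = blocked cell.

    Returns the shortest list of cells from start to goal, or None if
    the goal is unreachable. Illustrates how a robot can search for an
    alternative route when its direct path is blocked.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route available

# A wall blocks the straight line from (0,0) to (2,0); BFS routes around it.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 0)))
```

A production system would plan on a continuous map with costs and live sensor updates, but the fallback logic, search again when blocked, is the same.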
What are the navigation sensors that are used, is it reliant on LiDAR or regular cameras?
Ava’s robotic navigation technologies use a variety of sensors (3-D cameras, LiDAR, and IMU) and they are combined for all actions, such as localization, planning, collision avoidance, cliff detections, etc. We operate in medium- and large-size spaces, so we think LiDAR is a very valuable part of a sensing package for real-world commercial spaces.
The telepresence robot looks like it would be extremely useful in the hospitality sector. Could you walk us through some of these potential use-cases?
Visits to Executive Briefing Centers provide access to senior-level executives and deliver value in the form of hands-on briefings, strategy reviews, product demonstrations and opportunities for relationship building. Customer Experience Centers offer organizations the opportunity to wow customers and show off their latest products and services. But with so many busy schedules, getting the right people to attend is not always easy.
For meeting planners, Ava provides the ability to “walk” the hotel and visit the meeting spaces, conference rooms and ballrooms that are available for their conference or event. In this application, the property’s sales and marketing team gain a unique tool to accelerate their sales cycles.
When invitees and guests can’t get to the conference or event, Ava allows them to attend and move around as if they were there. Whether it’s a business meeting, conference exhibit hall, or social event, Ava provides an immersive experience with freedom to move around.
What are some of the use-cases that are being targeted in the corporate sector?
Businesses benefit from Ava in many ways. The robot allows freedom of movement and access to meetings, corporate training, factory inspections, manufacturing sites, labs and customer experience settings.
Natural, face-to-face, ad-hoc conversations are critical to moving a business forward. Yet today’s globally distributed businesses have employees telecommuting from home or from across the world, who miss these vital interactions. With Ava, you unlock the ability to bring everyone back together as if they’re sitting together in the office and can walk up and interact naturally.
Use Case examples include:
- Agile Product Development: Agile product development teams come together for scheduled and unscheduled meetings, looking to achieve high levels of collaboration and communication. When remote workers are part of the team, existing collaboration tools are challenged to meet the need. With Ava, remote team members can actively participate in stand-up meetings, sprint planning and demos, and project reviews as if they were co-located with the team.
- Manufacturing: In manufacturing, remote visits by management, collaboration between experts at headquarters and staff at the plant, and remote tours by customers or suppliers are frequent – and necessary – events. Ava increases collaboration between those on the design team or in engineering and those building and delivering the final product on the plant floor. Also, imagine that the manufacturing plant is experiencing a production-line problem, but the person who knows how to fix it is thousands of miles away. In such a case, the technician needs to freely walk to different parts of the manufacturing floor to meet with someone or see something. Ava can help by delivering that critical physical presence right to the factory floor. Ava allows the remote person to immediately connect via the robot as if she were physically present, put eyes on the problem, and communicate with the local team on the floor. As a result, she can deliver immediate insight into the problem and quickly resolve the issue.
- Laboratories and Clean Rooms: Those who work in laboratories and clean rooms work hard to ensure they are kept sterile and clean. While necessary, this can be a time-consuming process for employees entering and leaving these spaces repeatedly during the day. Due to the risks of potential contamination, companies often limit tours by customers and other visitors. Ava brings people right into a laboratory or a clean room without compromising the space. With Ava, remote visitors can easily move around as if they were there in person, observing the work being done and speaking with employees.
Ava Robotics recently partnered with Qatar Airways to Introduce Smart Airport Technologies at QITCOM 2019. Could you share with us some details in regards to this event and how those in attendance reacted?
We have been fortunate to work with Hamad International Airport in Qatar and Qatar Airways via our strategic partner Cisco, building applications for robots in airports for a variety of use cases. Showing our work at QITCOM 2019 was a good opportunity to expose the IT community to the applications that are now possible across different verticals and industries.
Is there anything else that you would like to share about Ava Robotics?
In these times of challenges to global travel, we have seen increased demand for solutions like telepresence robotics. Customers are just beginning to realize the potential of applications that truly empower remote workers to collaborate as if they were physically present at a distant location.