Scientists are developing networks of independent robots that cooperate to create smart satellites, which could then be used to repair other satellites in space. Today, little can be done about a broken satellite, and breakdowns happen often. With no practical fix available, these expensive satellites orbit Earth for years until gravity finally drags them back into the atmosphere.
Ou Ma, a professor at the University of Cincinnati, is engineering robotics technology to service orbiting satellites before they fail. He runs the Intelligent Robotics and Autonomous Lab at the university, and he would like to create robotic satellites capable of docking with other satellites for repairs and refueling.
An ideal repair satellite would be capable of performing multiple tasks, according to Ma, whose long career includes projects involving the robotic arms on the International Space Station and the former space shuttle program.
In the lab, Ma and UC senior research associate Anoop Sathyan are working on robotic networks that work independently and collaboratively on a common task.
In their latest study, the pair tested a group of robots on a novel game: strings attached to a token that must be moved to a target spot on a table. Each robot controls one string, so no robot can move the token to the right spot alone; each must slacken or tighten its string in response to the others' actions.
The team used an artificial-intelligence technique called genetic fuzzy logic, and with it they were able to get the robots, first three and later five, to move the token to the desired spot.
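Genetic fuzzy logic, broadly, means using a genetic algorithm to tune the parameters of a fuzzy-logic controller. The toy sketch below illustrates the idea under heavy simplification: a single fuzzy rule maps a token's position error along a line to a corrective action, and a minimal genetic algorithm evolves the rule's membership-function width. The 1-D physics, the parameters, and the fitness function are illustrative assumptions, not the UC team's actual setup.

```python
import random

def fuzzy_action(error, width):
    """Triangular membership in "far from target": full correction for
    large errors, proportionally smaller corrections near the target."""
    degree = min(1.0, abs(error) / width)
    return degree if error > 0 else -degree

def simulate(width, start=5.0, steps=50, gain=0.5):
    """Drive a token along a line toward 0 with the fuzzy rule;
    fitness is the final absolute error (lower is better)."""
    pos = start
    for _ in range(steps):
        pos -= gain * fuzzy_action(pos, width)
    return abs(pos)

def evolve(pop_size=20, generations=30):
    """Minimal genetic algorithm over the membership width:
    keep the best half, refill with mutated copies."""
    pop = [random.uniform(0.1, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate)                 # rank by fitness
        survivors = pop[: pop_size // 2]
        children = [w * random.uniform(0.8, 1.25) for w in survivors]
        pop = survivors + children
    return min(pop, key=simulate)

best_width = evolve()
```

The appeal of the genetic layer is that the fuzzy rules need not be hand-tuned: the evolutionary search finds membership parameters that make the controller settle on the target instead of overshooting.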
The results of the research and experiments were published in the journal Robotica this month.
When the researchers used five different robots, they learned that the task can be completed even if one of them malfunctions.
“This will be especially true for problems with larger numbers of robots where the liability of an individual robot will be low,” the researchers concluded.
According to Ma, every satellite launch has the possibility of countless problems, and it is almost always impossible to do anything about it once the satellite is deployed.
Earlier this year, a $400 million Intelsat satellite about the size of a small school bus malfunctioned after reaching a high elliptical orbit. Some of the first 60 Starlink satellites launched by SpaceX also malfunctioned this year. In SpaceX's case, the satellites were designed to orbit Earth at a low altitude, so their orbits will decay after a few years.
The most famous case of all came in 1990, when the Hubble Space Telescope was deployed and NASA discovered that its primary mirror was flawed. A repair mission aboard the space shuttle Endeavour followed in 1993, installing corrective optics that allowed sharp images of the universe to make it back to Earth.
Sending humans to space in order to repair satellites is extremely expensive, according to Ma. The missions can cost billions of dollars and are difficult to complete.
The stakes rise with every satellite launched.
“Big commercial satellites are costly. They run out of fuel or malfunction or break down,” Ma said. “They would like to be able to go up there and fix it, but nowadays it’s impossible.”
NASA plans to launch a satellite in 2022 that is capable of refueling others in low Earth orbit; its first task will be to intercept and refuel a U.S. government satellite. The project, called Restore-L, is expected to serve as the proof of concept for autonomous satellite servicing, according to NASA.
Maxar, a company based in Colorado, will supply the spacecraft infrastructure and robotic arms for the project.
According to John Lymer, chief roboticist at Maxar, most satellites fail because they run out of fuel.
“You’re retiring a perfectly good satellite because it ran out of gas,” he said.
“Ou Ma, who I’ve worked with for many years, works on rendezvous and proximity operations. There are all kinds of technical solutions out there. Some will be better than others. It’s about getting operational experience to find out whose algorithms are better and what reduces operational risk the most.”
Researchers Training Plastic to Walk Under Light
Researchers in Finland are currently working on developing and “training” pieces of plastic to be commanded by light. This is the first time that a synthetic actuator, in this case thermoplastic, is able to “learn” how to do a new action, in this case walking, based on its past experiences and not computer programming.
The plastics in this project are made from a thermo-responsive liquid crystal polymer network with a coat of dye. They are soft actuators, able to convert energy into mechanical motion. At first the actuator responded only to heat, but because light can be associated with heat, the plastic has come to respond to light as well. The actuator is flexible and bends much as a human bends an index finger. When light is projected onto it and it heats up, it “walks” like an inchworm, moving at about 1 mm/s, roughly the pace of a snail.
Arri Priimägi of Tampere University is a senior author of the study.
“Our research is essentially asking the question if an inanimate material can somehow learn in a very simplistic sense,” he says. “My colleague, Professor Olli Ikkala from Aalto University, posed the question: Can materials learn, and what does it mean if materials would learn? We then joined forces in this research to make robots that would somehow learn new tricks.”
Other members of the research team include postdoctoral researchers Hao Zeng, Tampere University, and Hang Zhang, Aalto University.
The conditioning process that associates light with heat involves letting the dye on the surface diffuse throughout the actuator, turning it blue. This increases the overall light absorption, which in turn strengthens the photothermal effect: the actuator's temperature rises, and it bends upon irradiation.
According to Priimägi, the team was inspired by another well-known experiment.
“This study that we did was inspired by Pavlov’s dog experiment,” says Priimägi.
In that famous experiment, a dog salivated in response to seeing food. Pavlov then rang a bell before feeding the dog, and after a few repetitions the dog associated the bell with food and began salivating as soon as it heard the bell.
“If you think about our system, heat corresponds to the food, and the light would correspond to the bell in Pavlov’s experiment.”
“Many will say that we are pushing this analogy too far,” says Priimägi. “In some sense, those people are right because compared to biological systems, the material we studied is very simple and limited. But under the right circumstances, the analogy holds.”
The team will now increase the complexity and controllability of these systems, which will help establish the limits of the analogies that can be drawn to biological systems.
“We aim at asking questions which maybe allow us to look at inanimate materials in a new light.”
The systems can do more than walk. They can “recognize” and respond to different wavelengths of light, determined by their dye coating. This makes the material a tunable, remotely controllable soft micro-robot, which is extremely useful for biomedical applications.
“I think there’s a lot of cool aspects there. These remotely controlled liquid crystal networks behave like small artificial muscles,” says Priimägi. “I hope and believe there are many ways that they can benefit the biomedical field, among other fields such as photonics, in the future.”
AI Project By F-Secure To Harness Potential of ‘Swarm Intelligence’
The cybersecurity company F-Secure has recently created a new AI project that utilizes techniques inspired by “swarm intelligence”. As AI News reports, F-Secure’s new AI approach makes use of many decentralized AI agents that collaborate to accomplish specific goals.
F-Secure’s new swarm AI is similar in concept to Fetch AI’s earlier take on decentralized AI systems, which have been applied to IoT concepts. However, unlike Fetch AI, F-Secure is aiming to take the concept of decentralized AI and use it in the cybersecurity domain. Specifically, F-Secure is aiming to improve the company’s detection and response capabilities.
As explained by Matti Aksela, the VP of AI at F-Secure, it is commonly assumed that AI should aim to copy human intelligence. While patterning AI systems after human reasoning and behavior isn’t inherently bad, Aksela told AI News that patterning AI only after human cognition limits what we can do with it. We can look beyond human cognition and explore other ways of organizing and architecting AI, he explained, and a wider range of possible models can augment what people already accomplish with AI.
Swarm intelligence is a behavior of decentralized systems. It’s a collective behavior that manifests itself in both artificial and natural systems. In terms of biological systems, swarm intelligence is often seen in large colonies of organisms like ants, bees, fish, and birds. For instance, many birds migrate in large flocks and as the flock travels it maintains a consistent formation that fluctuates very little, with the birds only deviating a few inches from one another in their formation. It is thought that flying in such formations reduces the energy that the birds require to fly.
Swarm intelligence has been used for probabilistic routing in telecommunication networks and in the creation of microbots. One example of this concept is the tiny robots created by MicroFactory. The robots are controlled by a circuit board that generates a magnetic field, and the robots themselves are magnets. The robots are also equipped with small manipulation tools that they can use to interact with the environment around them and manipulate objects.
The development of genuinely human-like artificial intelligence, or artificial general intelligence (AGI), will take some time. Estimates from AI experts vary, but on average it is thought that creating an AGI will take around 50 years. By contrast, developing distributed autonomous agents like the ones F-Secure is building should take significantly less time.
According to F-Secure, several more years of development will be needed for their distributed intelligence architecture to reach its full potential, but some mechanisms based on the swarm-intelligence model are already in use. F-Secure has used swarm-intelligence techniques to detect breaches and engineer solutions.
F-Secure’s AI agents are capable of communicating with each other and collaborating.
Swarm intelligence techniques draw on the capabilities of the individual agents in the pool; networked together, these skills form a robust and flexible system capable of carrying out complex tasks.
“Essentially, you’ll have a colony of fast local AIs adapting to their own environment while working together, instead of one big AI making decisions for everyone,” Aksela explained.
In F-Secure's case, the different agents learn from different networks and hosts, and they can spread this knowledge through the wider network that joins organizations together. F-Secure says one of the main benefits of this approach is that organizations can share sensitive information via the cloud and still remain protected, thanks to superior breach and attack detection.
Researchers Develop New Method for Controlling Soft Robots
Researchers from the Massachusetts Institute of Technology have figured out a way to better control and design soft robots to perform target tasks. This has been a goal in soft-robotics for a long time, and it is a big accomplishment.
Soft robots have flexible bodies that can move in an effectively infinite number of ways at any given moment. Computationally, this makes for a highly complex “state representation” describing how each part of the robot is moving. Such representations can have millions of dimensions, making it difficult to calculate the best way for a robot to complete complex target tasks.
The MIT researchers will present their model at the Conference on Neural Information Processing Systems in December. The model learns a compact, or “low-dimensional,” state representation based on the physics of the robot, its environment, and other factors. It then co-optimizes movement control and material design parameters aimed at specific tasks.
Andrew Spielberg is a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
“Soft robots are infinite-dimensional creatures that bend in a billion different ways at any given moment, but in truth, there are natural ways soft objects are likely to bend. We find the natural states of soft robots can be described very compactly in a low-dimensional description. We optimize control and design of soft robots by learning a good description of the likely states.”
In simulations, the model enabled 2D and 3D soft robots to complete target tasks, such as moving specified distances and reaching target spots, faster and more accurately than current methods. The researchers now want to try the model in real soft robots.
Other individuals who worked on the project include CSAIL graduate students Allan Zhao, Tao Du, and Yuanming Hu; Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.
Soft-robotics is a growing field that is extremely important within the larger scope of advanced robotics. Characteristics such as flexible bodies could play a role in safer interaction with humans, object manipulation, maneuverability, and much more.
During the simulations, an “observer” is responsible for the control of the robots: a program that computes variables capturing how the soft robot is moving as it works to complete a task.
Eventually, the researchers developed a new “learning-in-the-loop optimization” method, in which all optimized parameters are learned during a single feedback loop spanning multiple simulations, while the state representation is learned at the same time.
The model uses a technique called the “material point method” (MPM), which simulates the behavior of particles of continuum materials, such as foams and liquids, on a background grid. The technique captures the particles of the robot and its observable environment as 3D pixels, or voxels.
The raw particle grid information is then sent to a machine-learning component that learns to take an input image, compress it to a low-dimensional representation, and then decompress it back into the input image.
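The compress/decompress component described here is, in effect, an autoencoder. The toy sketch below shows the idea on synthetic data: a one-latent-variable linear autoencoder, written in plain Python, learns to squeeze 3-D "particle states" that really vary along a single direction down to one number and reconstruct them. The data, dimensions, and training details are illustrative assumptions, not the MIT model, which operates on full MPM voxel grids with neural networks.

```python
import random

random.seed(0)

# Synthetic data: 3-D points on a line, like a robot whose many
# coordinates are governed by one underlying degree of freedom.
data = [[t, 2 * t, -t] for t in (random.uniform(-1, 1) for _ in range(100))]

enc = [random.uniform(-0.1, 0.1) for _ in range(3)]  # encoder weights
dec = [random.uniform(-0.1, 0.1) for _ in range(3)]  # decoder weights
lr = 0.05

for _ in range(2000):
    x = random.choice(data)
    z = sum(e * xi for e, xi in zip(enc, x))       # compress to a 1-D latent
    recon = [d * z for d in dec]                   # decompress back to 3-D
    err = [r - xi for r, xi in zip(recon, x)]      # reconstruction error
    for i in range(3):                             # gradient step on the decoder
        dec[i] -= lr * err[i] * z
    grad_z = sum(er * d for er, d in zip(err, dec))  # backprop into the latent
    for i in range(3):                             # gradient step on the encoder
        enc[i] -= lr * grad_z * x[i]

# After training, compress-then-decompress approximately reproduces the input,
# so one latent number stands in for the full 3-D state.
sample = data[0]
latent = sum(e * xi for e, xi in zip(enc, sample))
reconstruction = [d * latent for d in dec]
```

The learned latent plays the role of the "low-dimensional state representation" in the paragraph above: a controller can reason about one coordinate instead of millions of voxel values.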
The learned compressed representation serves as the robot’s low-dimensional state representation. In an optimization phase, the compressed representation loops back into the controller, which outputs a calculated action for how each particle should move in the next MPM-simulated step.
At the same time, the controller uses the information to adjust the optimal stiffness of each particle. The material information could be used for 3D-printing soft robots, since each particle spot can be printed with different stiffness.
“This allows for creating robot designs catered to the robot motions that will be relevant to specific tasks,” Spielberg says. “By learning these parameters together, you keep everything as synchronized as possible to make that design process easier.”
The researchers hope eventually to take their designs all the way from simulation to fabrication.