
Robotics

Flexible Robot “Grows” Like a Plant


Engineers from MIT have designed a robot that can extend a chain-like appendage, making it extremely flexible and able to configure itself in many different ways. At the same time, it is strong enough to support heavy loads and apply torque, so it can assemble parts in tight spaces. After completing its tasks, the robot can retract the appendage and extend it again at a different length and in a different shape.

This newly developed robot could make a difference in settings like warehouses, where most robots cannot fit into narrow spaces. The plant-like robot could be used to grab products at the back of a shelf, or even to reach around a car’s engine parts and unscrew an oil cap.

The design was inspired by plants and the way they grow. In that process, nutrients are transported to the plant’s tip as a fluid; once they reach the tip, they are converted into solid material that builds up a supportive stem, a little at a time.

The plant-like robot has a “growing point” or gearbox, which draws a loose chain of interlocking blocks into the box. Once there, gears lock the chain units together and release the chain, unit by unit, until it forms a rigid appendage. 

Team of Engineers

The new robot was presented this week at the IEEE International Conference on Intelligent Robots and Systems (IROS) in Macau. In the future, the engineers would like to add grippers, cameras, and sensors that can be mounted onto the gearbox. This would allow the robot to tighten a loose screw after winding its way through an aircraft’s propulsion system, or to retrieve a product without disturbing anything in its surroundings.

Harry Asada is a professor of mechanical engineering at MIT.

“Think about changing the oil in your car,” Asada says. “After you open the engine roof, you have to be flexible enough to make sharp turns, left and right, to get to the oil filter, and then you have to be strong enough to twist the oil filter cap to remove it.”

Tongxi Yan is a former graduate student in Asada’s lab, and he led the work.

“Now we have a robot that can potentially accomplish such tasks,” he says. “It can grow, retract, and grow again to a different shape, to adapt to its environment.”

The team of engineers also consisted of MIT graduate student Emily Kamienski and visiting scholar Seiichi Teshigawara.

Plant-Like Robot

After defining the different aspects of plant growth, the team looked for ways to implement them in a robot.

“The realization of the robot is totally different from a real plant, but it exhibits the same kind of functionality, at a certain abstract level,” Asada says.

The gearbox was designed to represent the robot’s “growing tip,” the equivalent of a plant’s bud, where nutrients flow up and a rigid stem is built. The box contains a system of gears and motors that pull up a fluidized material; in this robot, that material is a sequence of 3-D printed plastic units linked to one another.

The robot can be programmed to choose which units to lock together and which to leave unlocked, allowing it to form specific shapes and “grow” in specific directions.

“It can be locked in different places to be curved in different ways, and have a wide range of motions,” Yan says.
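As a rough sketch of how such a lock pattern could translate into a shape, the hypothetical example below (not the MIT team’s code; the unit length and bend angle are invented values) treats each chain unit as a short segment that continues straight when locked and turns by a fixed angle when left unlocked, then traces the resulting 2-D path of the appendage.

```python
import math

# Hypothetical sketch: each unit in the chain either stays straight (locked)
# or contributes a fixed bend (unlocked); the appendage's shape is the chained
# sum of those per-unit turns. Unit length and bend angle are assumed values.
UNIT_LENGTH = 0.05             # metres per chain unit (assumed)
BEND_ANGLE = math.radians(15)  # turn contributed by an unlocked unit (assumed)

def appendage_path(lock_pattern):
    """Return the 2-D coordinates of each joint for a given lock pattern.

    lock_pattern is a sequence of booleans: True = locked straight,
    False = unlocked, contributing a fixed bend.
    """
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for locked in lock_pattern:
        if not locked:
            heading += BEND_ANGLE
        x += UNIT_LENGTH * math.cos(heading)
        y += UNIT_LENGTH * math.sin(heading)
        points.append((x, y))
    return points

# Example: grow straight for six units, then curve left around an obstacle.
pattern = [True] * 6 + [False] * 6
for px, py in appendage_path(pattern):
    print(f"({px:.3f}, {py:.3f})")
```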

The chain can support a one-pound weight when locked and rigid. If a gripper were attached, the researchers believe the robot could grow long enough to maneuver through a narrow space and perform tasks such as unscrewing a cap.

 


Robotics

Researchers Training Plastic to Walk Under Light


Researchers in Finland are developing and “training” pieces of plastic to be commanded by light. This is the first time a synthetic actuator, in this case a thermoplastic, has “learned” a new action, in this case walking, based on its past experiences rather than computer programming.

The plastics in this project are made from a thermo-responsive liquid crystal polymer network with a coating of dye. They are soft actuators that convert energy into mechanical motion. At first the actuator responded only to heat, but because light can be associated with heat, the plastic becomes able to respond to light as well. The actuator is somewhat flexible and bends much like a human index finger. When light is projected onto it and it heats up, the actuator “walks” like an inchworm, moving at a speed of 1 mm/s, about the pace of a snail.

Arri Priimägi, of Tampere University, is a senior author of the study.

“Our research is essentially asking the question if an inanimate material can somehow learn in a very simplistic sense,” he says. “My colleague, Professor Olli Ikkala from Aalto University, posed the question: Can materials learn, and what does it mean if materials would learn? We then joined forces in this research to make robots that would somehow learn new tricks.” 

Other members of the research team include postdoctoral researchers Hao Zeng, Tampere University, and Hang Zhang, Aalto University. 

The conditioning process that associates light with heat involves letting the dye on the surface diffuse throughout the actuator, turning it blue. This increases the overall light absorption and, with it, the photothermal effect: the actuator’s temperature rises and it bends upon irradiation.

According to Priimägi, the team was inspired by another well-known experiment. 

“This study that we did was inspired by Pavlov’s dog experiment,” says Priimägi.

In that famous experiment, a dog salivated in response to seeing food, and Pavlov rang a bell before giving the dog the food. After this was repeated a few times, the dog came to associate the food with the bell and started salivating as soon as it heard the bell.

“If you think about our system, heat corresponds to the food, and the light would correspond to the bell in Pavlov’s experiment.”

“Many will say that we are pushing this analogy too far,” says Priimägi. “In some sense, those people are right because compared to biological systems, the material we studied is very simple and limited. But under the right circumstances, the analogy holds.”
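A loose way to picture that conditioning step in code (purely illustrative; the thresholds and the simple dye-diffusion model below are assumptions, not measurements from the study) is an actuator that always bends in response to heat, but only starts bending to light once repeated paired exposures have let enough dye diffuse in:

```python
# Illustrative sketch of the Pavlovian-style conditioning loop: the actuator
# always bends when heated, but it only bends under light alone after paired
# light-and-heat exposures have let the surface dye diffuse inward and raise
# light absorption. All constants are invented for illustration.

class ConditionedActuator:
    def __init__(self):
        self.dye_diffusion = 0.0  # 0 = dye only on the surface, 1 = fully diffused

    def expose(self, light: bool, heat: bool) -> bool:
        """Return True if the actuator bends during this exposure."""
        if light and heat:
            # Paired exposure lets the dye diffuse a little further in,
            # increasing photothermal absorption for future light-only exposures.
            self.dye_diffusion = min(1.0, self.dye_diffusion + 0.2)
        return heat or (light and self.dye_diffusion >= 0.6)

actuator = ConditionedActuator()
print(actuator.expose(light=True, heat=False))  # False: light alone does nothing yet
for _ in range(3):
    actuator.expose(light=True, heat=True)      # conditioning: light paired with heat
print(actuator.expose(light=True, heat=False))  # True: light alone now triggers bending
```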

The team will now increase the complexity and controllability of the systems, which will help establish the limits of the analogies that can be drawn to biological systems.

“We aim at asking questions which maybe allow us to look at inanimate materials from a new light.”

The systems can do more than just walk. They can “recognize” and respond to different wavelengths of light corresponding to their dye coating. This makes the material a tunable soft micro-robot that can be remotely controlled, which is especially useful for biomedical applications.

“I think there’s a lot of cool aspects there. These remotely controlled liquid crystal networks behave like small artificial muscles,” says Priimägi. “I hope and believe there are many ways that they can benefit the biomedical field, among other fields such as photonics, in the future.”

 


Robotics

AI Project By F-Secure To Harness Potential of ‘Swarm Intelligence’


The cybersecurity company F-Secure has recently created a new AI project that utilizes techniques inspired by “swarm intelligence”. As AI News reports, F-Secure’s new approach makes use of many decentralized AI agents that collaborate to accomplish specific goals.

F-Secure’s new swarm AI is similar in concept to Fetch AI’s earlier take on decentralized AI systems, which have been applied to IoT concepts. However, unlike Fetch AI, F-Secure aims to apply decentralized AI to the cybersecurity domain, specifically to improve the company’s detection and response capabilities.

According to Matti Aksela, the VP of AI at F-Secure, it is commonly believed that AI should aim to copy human intelligence. While patterning AI systems after human reasoning and behavior isn’t inherently bad, Aksela told AI News that modeling AI only on human cognition limits what we can do with it. We can look beyond human cognition and explore other ways of organizing and architecting AI, and a wider range of possible models can augment what people can already accomplish with AI.

Swarm intelligence is the collective behavior of decentralized systems, and it manifests in both artificial and natural systems. In biology, it is often seen in large groups of organisms such as ants, bees, fish, and birds. For instance, many birds migrate in large flocks, and as the flock travels it maintains a consistent formation that fluctuates very little, with individual birds deviating only a few inches from their positions. Flying in such formations is thought to reduce the energy the birds need to fly.
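A classic way to see that kind of decentralized coordination in code is Reynolds-style flocking, in which each simulated bird follows only local rules (move toward nearby flockmates, match their heading, avoid crowding) and a coherent formation emerges with no central controller. The sketch below is a generic illustration of the idea, not anything from F-Secure; its weights and radii are arbitrary.

```python
import random

# Minimal Reynolds-style flocking sketch: each boid reacts only to nearby
# flockmates (cohesion, alignment, separation), yet a coherent flock emerges
# with no central controller. Weights and radii are arbitrary illustration
# values, not tuned parameters.

NUM_BOIDS = 30
NEIGHBOR_RADIUS = 10.0
SEPARATION_RADIUS = 2.0

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 50), random.uniform(0, 50)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids):
    updates = []
    for b in boids:
        near = [o for o in boids if o is not b
                and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < NEIGHBOR_RADIUS ** 2]
        if not near:
            updates.append((0.0, 0.0))
            continue
        n = len(near)
        # Cohesion: steer toward the local centre of mass.
        coh_x = sum(o.x for o in near) / n - b.x
        coh_y = sum(o.y for o in near) / n - b.y
        # Alignment: nudge velocity toward the neighbours' average velocity.
        ali_x = sum(o.vx for o in near) / n - b.vx
        ali_y = sum(o.vy for o in near) / n - b.vy
        # Separation: back away from boids that are crowding too close.
        crowd = [o for o in near
                 if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < SEPARATION_RADIUS ** 2]
        sep_x = sum(b.x - o.x for o in crowd)
        sep_y = sum(b.y - o.y for o in crowd)
        updates.append((0.01 * coh_x + 0.05 * ali_x + 0.1 * sep_x,
                        0.01 * coh_y + 0.05 * ali_y + 0.1 * sep_y))
    # Apply all velocity updates at once so every boid reacts to the same snapshot.
    for b, (dvx, dvy) in zip(boids, updates):
        b.vx += dvx
        b.vy += dvy
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(NUM_BOIDS)]
for _ in range(100):
    step(flock)
print("flock spread:", max(b.x for b in flock) - min(b.x for b in flock))
```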

Swarm intelligence has been used for probabilistic routing in telecommunication networks and in the creation of microbots. One example of this concept is the tiny robots created by MicroFactory. The robots themselves are magnets, controlled by a circuit board that generates a magnetic field, and they carry small manipulation tools they can use to interact with their environment and manipulate objects.

Genuinely human-like artificial intelligence, or Artificial General Intelligence, will take some time to develop. Estimates by AI experts vary, but on average it is thought that creating an AGI will take around 50 years. In contrast, developing distributed autonomous agents like the ones F-Secure is building should take significantly less time.

According to F-Secure, several more years of development will be needed for their distributed intelligence architecture to reach its full potential, but some mechanisms based on the swarm-intelligence model are already in use. F-Secure has used swarm-intelligence techniques to detect breaches and engineer solutions.

F-Secure’s AI agents are capable of communicating with each other and collaborating.

Swarm intelligence techniques make use of the talents or capabilities of individual agents in the agent pool; when these skills are networked together, the result is a robust and flexible system capable of carrying out complex tasks.

“Essentially, you’ll have a colony of fast local AIs adapting to their own environment while working together, instead of one big AI making decisions for everyone,” Aksela explained.

In F-Secure’s specific case, the different agents can learn from different networks and hosts, and they can then spread this knowledge through the wider network that joins different organizations together. F-Secure says one of the main benefits of this approach is that it enables organizations to share sensitive information via the cloud and still remain protected, thanks to superior breach and attack detection.
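F-Secure has not published implementation details, but the general pattern it describes (local agents learning from their own hosts and sharing what they learn with peers) can be sketched roughly as below. The class names, events, and gossip scheme are invented for illustration and are not F-Secure’s implementation.

```python
# Hypothetical sketch of decentralized detection agents: each agent records
# indicators detected on its own host and gossips them to peer agents, so a
# detection made on one network can raise alerts elsewhere.

class SwarmAgent:
    def __init__(self, name):
        self.name = name
        self.known_indicators = set()  # locally learned + shared indicators
        self.peers = []

    def observe(self, event) -> bool:
        """Check a local event against shared knowledge and flag a match."""
        if event in self.known_indicators:
            print(f"[{self.name}] ALERT: known indicator seen locally: {event}")
            return True
        return False

    def learn_indicator(self, indicator):
        """Record a locally detected indicator and gossip it to peers."""
        if indicator not in self.known_indicators:
            self.known_indicators.add(indicator)
            for peer in self.peers:
                peer.learn_indicator(indicator)  # recursion stops once peers know it

# Two agents on different networks, joined as peers.
a, b = SwarmAgent("host-A"), SwarmAgent("host-B")
a.peers.append(b)
b.peers.append(a)

a.learn_indicator("suspicious-domain.example")  # detected locally on host-A
b.observe("suspicious-domain.example")          # host-B now recognizes it too
```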


Robotics

Scientists Developing Robotic Networks to Make Smart Satellites


Scientists are developing networks of independent robots that work together to create smart satellites, which could then be used to repair other satellites in space. Currently, it is extremely difficult to do anything about broken satellites, and breakdowns happen quite often. Because there is no real solution, the expensive satellites end up orbiting Earth for years until gravity eventually pulls them back into the atmosphere.

Ou Ma, a professor from the University of Cincinnati, is engineering robotics technology to fix the orbiting satellites before they break. He runs the Intelligent Robotics and Autonomous Lab at the university, and he would like to create robotic satellites that are capable of docking with other satellites for repairs and refueling. 

The best repair satellite will be capable of performing multiple tasks, according to Ma. He has had a long career working on projects involving the robotic arms on the International Space Station as well as the former space shuttle program.

In the lab, Ma and UC senior research associate Anoop Sathyan are working on robotic networks that work independently and collaboratively on a common task. 

In their latest study, the pair tested a group of robots on a novel game in which strings are used to move an attached token to a target spot on a table. Each robot controls one string, so it needs the help of the others to move the token to the right spot; to do so, each robot releases or increases the tension on its string in response to the other robots’ actions.

Using an artificial intelligence technique called genetic fuzzy logic, the team was able to get three robots, and later five, to move the token to the desired spot.
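The study’s controller is a genetic fuzzy system; as a much simpler stand-in, the toy simulation below has each robot pull its string with a tension proportional to how well pulling toward that robot’s anchor moves the token toward the target. This is a plain proportional rule, not the fuzzy controller from the paper, and the positions and gains are invented.

```python
import math

# Toy version of the string-pulling game: three robots sit at fixed anchor
# points, each holding one string tied to a shared token. A robot pulls only
# when pulling toward its anchor would move the token closer to the target,
# and the pull fades as the token nears the goal.

ROBOTS = [(0.0, 0.0), (10.0, 0.0), (5.0, 10.0)]  # string anchor points (invented)
TARGET = (6.0, 5.0)
GAIN, STEP = 0.5, 0.1

def unit(dx, dy):
    norm = math.hypot(dx, dy) or 1.0
    return dx / norm, dy / norm

token = [5.0, 3.0]
for _ in range(300):
    dist = math.hypot(TARGET[0] - token[0], TARGET[1] - token[1])
    to_target = unit(TARGET[0] - token[0], TARGET[1] - token[1])
    vx = vy = 0.0
    for rx, ry in ROBOTS:
        to_robot = unit(rx - token[0], ry - token[1])
        # Pull only if this string's direction helps move the token toward the target.
        alignment = to_robot[0] * to_target[0] + to_robot[1] * to_target[1]
        tension = GAIN * max(0.0, alignment) * min(1.0, dist)
        vx += tension * to_robot[0]
        vy += tension * to_robot[1]
    token[0] += STEP * vx
    token[1] += STEP * vy

print(f"token ended near ({token[0]:.2f}, {token[1]:.2f}); target was {TARGET}")
```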

The results of the research and experiments were published in the journal Robotica this month. 

When the researchers used five different robots, they learned that the task can be completed even if one of them malfunctions. 

“This will be especially true for problems with larger numbers of robots where the liability of an individual robot will be low,” the researchers concluded.

According to Ma, every satellite launch comes with the possibility of countless problems, and it is almost always impossible to do anything about them once the satellite is deployed.

Earlier this year, a $400 million Intelsat satellite the size of a small school bus malfunctioned after reaching a high elliptical orbit. Some of the first 60 Starlink satellites launched by SpaceX also malfunctioned this year. In SpaceX’s case, the satellites were designed to orbit Earth at a low altitude, so the failed units will decay out of orbit after a few years.

The most famous case took place in 1990, when the Hubble Space Telescope was deployed and NASA later learned that its mirror was flawed. A repair mission aboard the space shuttle Endeavour followed in 1993, installing corrective optics that allowed sharp images of the universe to make it back to Earth.

Sending humans to space in order to repair satellites is extremely expensive, according to Ma. The missions can cost billions of dollars and are difficult to complete.

The issues become more prominent every time a satellite is launched. 

“Big commercial satellites are costly. They run out of fuel or malfunction or break down,” Ma said. “They would like to be able to go up there and fix it, but nowadays it’s impossible.”

NASA is looking to launch a satellite in 2022 that is capable of refueling others in low Earth orbit. They will set out to intercept and refuel a U.S. government satellite. The project is called Restore-L, and it is expected to be the proof of concept for autonomous satellite repairs, according to NASA.

Maxar, a company out of Colorado, will be responsible for the spacecraft infrastructure and robotic arms for the project. 

According to John Lymer, chief roboticist at Maxar, most satellites fail because they run out of fuel.  

“You’re retiring a perfectly good satellite because it ran out of gas,” he said.

“Ou Ma, who I’ve worked with for many years, works on rendezvous and proximity operations. There are all kinds of technical solutions out there. Some will be better than others. It’s about getting operational experience to find out whose algorithms are better and what reduces operational risk the most.”

 

 
