Advances in artificial intelligence have produced bots and machines that can potentially pass as human when they interact with people exclusively through a digital medium. Recently, a team of computer science researchers studied how robots and humans interact when the humans believe the robots are also human. As reported by ScienceDaily, the study found that people find robots and chatbots more persuasive when they believe the bots are human.
Talal Rahwan, an associate professor of Computer Science at NYU Abu Dhabi, recently led a study examining how robots and humans interact with one another. The results of the experiment were published in Nature Machine Intelligence in a paper titled "Transparency-Efficiency Tradeoff in Human-Machine Cooperation." During the study, test subjects were instructed to play a cooperative game with a partner, who could be either a human or a bot.
The game was a twist on the classic Prisoner's Dilemma, in which participants must decide on every round whether to cooperate with or betray their partner. One side may choose to defect, betraying its partner to gain a benefit at the other player's cost, while only by cooperating can both sides assure themselves of a gain.
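The incentive structure described above can be sketched as a payoff matrix. The values below are the conventional illustrative ones, not numbers from the study:

```python
# Toy payoff matrix for one round of the Prisoner's Dilemma.
# The payoff values are illustrative, not taken from the study.
PAYOFFS = {
    # (my_move, partner_move): (my_payoff, partner_payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: both gain
    ("cooperate", "defect"):    (0, 5),  # I am betrayed: partner gains at my cost
    ("defect",    "cooperate"): (5, 0),  # I betray: I gain at my partner's cost
    ("defect",    "defect"):    (1, 1),  # mutual defection: both do poorly
}

def play_round(my_move, partner_move):
    """Return the pair of payoffs for a single round."""
    return PAYOFFS[(my_move, partner_move)]

# Defecting tempts each player individually, yet mutual cooperation
# beats mutual defection for both sides:
print(play_round("cooperate", "cooperate"))  # (3, 3)
print(play_round("defect", "defect"))        # (1, 1)
```

The dilemma is visible in the matrix: whatever the partner does, defecting pays more for the individual, but if both reason that way, both end up worse off than if both had cooperated.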
The researchers manipulated their test subjects by providing them with either correct or incorrect information about the identity of their partner. Some participants were told they were playing with a bot even though their partner was actually human; other participants were in the inverse situation. Over the course of the experiment, the research team was able to quantify whether people treated partners differently when told those partners were bots. The researchers tracked the degree of any prejudice against the bots, and how these attitudes affected interactions with bots that identified themselves as such.
The results of the experiment demonstrated that bots were more effective at engendering cooperation from their partners when the human believed the bot was also a human. However, once it was revealed that the bot was a bot, cooperation levels dropped. Rahwan explained that while many scientists and ethicists agree that AI should be transparent about how it makes decisions, it is less clear whether AI should also be transparent about its non-human nature when communicating with others.
Last year, Google Duplex made a splash when a stage demo showed it making phone calls and booking appointments on behalf of its user, generating human-like speech so sophisticated that many people would have mistaken it for a real person had they not been told they were speaking to a bot. Since the debut of Google Duplex, many AI and robot ethicists have voiced concerns over the technology, prompting Google to say that the agent would identify itself as a bot in the future. Currently, Google Duplex is used only in a very limited capacity. It will soon see use in New Zealand, but only to check the operating hours of businesses. Ethicists remain worried about the degree to which the technology could be misused.
Rahwan argues that the recent study demonstrates that we should consider what costs we are willing to pay in return for transparency:
“Is it ethical to develop such a system? Should we prohibit bots from passing as humans, and force them to be transparent about who they are? If the answer is ‘Yes’, then our findings highlight the need to set standards for the efficiency cost that we are willing to pay in return for such transparency.”
Researchers Training Plastic to Walk Under Light
Researchers in Finland are currently developing and "training" pieces of plastic to be commanded by light. This is the first time a synthetic actuator, in this case a thermoplastic, has been able to "learn" a new action, in this case walking, from its past experiences rather than from computer programming.
The plastics in this project are made from a thermo-responsive liquid crystal polymer network with a coat of dye. They are soft actuators that convert energy into mechanical motion. At first the actuator responds only to heat, but because light can become associated with heat, the plastic can be made to respond to light as well. The actuator is somewhat flexible and bends much as a human bends an index finger. When light is projected onto the actuator and it heats up, it "walks" like an inchworm, moving at a speed of 1 mm/s, roughly the pace of a snail.
Arri Priimägi of Tampere University is the study's senior author.
“Our research is essentially asking the question if an inanimate material can somehow learn in a very simplistic sense,” he says. “My colleague, Professor Olli Ikkala from Aalto University, posed the question: Can materials learn, and what does it mean if materials would learn? We then joined forces in this research to make robots that would somehow learn new tricks.”
Other members of the research team include postdoctoral researchers Hao Zeng, Tampere University, and Hang Zhang, Aalto University.
The conditioning process that associates light with heat involves allowing the dye on the surface to diffuse throughout the actuator, turning it blue. This increases the overall light absorption, which strengthens the photothermal effect and raises the actuator's temperature, causing it to bend upon irradiation.
According to Priimägi, the team was inspired by another well-known experiment.
“This study that we did was inspired by Pavlov’s dog experiment,” says Priimägi.
In that famous experiment, a dog salivated in response to seeing food. Pavlov then rang a bell before feeding the dog, and after several repetitions the dog came to associate the bell with food and began salivating upon hearing it.
“If you think about our system, heat corresponds to the food, and the light would correspond to the bell in Pavlov’s experiment.”
“Many will say that we are pushing this analogy too far,” says Priimägi. “In some sense, those people are right because compared to biological systems, the material we studied is very simple and limited. But under the right circumstances, the analogy holds.”
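The conditioning analogy can be sketched as a toy learning rule. The code below is a crude, hypothetical Rescorla-Wagner-style model, with all numbers invented for illustration: repeated light-plus-heat pairings raise the "association" of the light stimulus until light alone crosses the response threshold, just as the dye diffusion eventually lets light alone bend the actuator.

```python
# Hypothetical toy model of classical conditioning (all values invented):
# repeated pairings of the conditioned stimulus (light) with the
# unconditioned stimulus (heat) strengthen the association.

def condition(pairings, learning_rate=0.3):
    """Return the association strength of light after repeated pairings."""
    association = 0.0
    for _ in range(pairings):
        # Each pairing moves the association toward its maximum (1.0).
        association += learning_rate * (1.0 - association)
    return association

def responds_to_light(association, threshold=0.5):
    """The actuator bends to light only once the association is strong enough."""
    return association >= threshold

print(responds_to_light(condition(0)))  # False: a naive actuator ignores light
print(responds_to_light(condition(5)))  # True: a conditioned actuator bends
```

As Priimägi notes, the real material is far simpler than a biological learner; the point of the sketch is only the shape of the analogy, an association that grows with paired exposures.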
The team will now increase the complexity and controllability of the systems, which will help identify the limits of the analogies that can be drawn to biological systems.
“We aim at asking questions which maybe allow us to look at inanimate materials from a new light.”
The systems can do more than just walk. They can "recognize" and respond to different wavelengths of light corresponding to their dye coating. This makes the material a tunable soft micro-robot that can be remotely controlled, which is extremely useful for biomedical applications.
“I think there’s a lot of cool aspects there. These remotely controlled liquid crystal networks behave like small artificial muscles,” says Priimägi. “I hope and believe there are many ways that they can benefit the biomedical field, among other fields such as photonics, in the future.”
AI Project By F-Secure To Harness Potential of ‘Swarm Intelligence’
The cybersecurity company F-Secure has recently created a new AI project that utilizes techniques inspired by "swarm intelligence". As AI News reports, F-Secure's new AI approach uses many decentralized AI agents that collaborate to accomplish specific goals.
F-Secure’s new swarm AI is similar in concept to Fetch AI’s earlier take on decentralized AI systems, which have been applied to IoT concepts. However, unlike Fetch AI, F-Secure is aiming to take the concept of decentralized AI and use it in the cybersecurity domain. Specifically, F-Secure is aiming to improve the company’s detection and response capabilities.
As explained by Matti Aksela, the VP of AI at F-Secure, it is commonly believed that AI should aim to copy human intelligence. While patterning AI systems after human reasoning and behavior isn't inherently bad, Aksela explained to AI News that patterning AI only after human cognition limits what we can do with it. We can look outside human cognition and explore other ways of organizing and architecting AI, and a wider range of possible models can augment what people already accomplish with AI.
Swarm intelligence is a behavior of decentralized systems. It’s a collective behavior that manifests itself in both artificial and natural systems. In terms of biological systems, swarm intelligence is often seen in large colonies of organisms like ants, bees, fish, and birds. For instance, many birds migrate in large flocks and as the flock travels it maintains a consistent formation that fluctuates very little, with the birds only deviating a few inches from one another in their formation. It is thought that flying in such formations reduces the energy that the birds require to fly.
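The tight formations described above emerge from simple local rules. The sketch below is a hypothetical, stripped-down illustration of one such rule (the "alignment" rule from Craig Reynolds' boids model of flocking): each bird nudges its heading toward the average heading of its neighbors, with no central coordinator.

```python
# Toy 1-D illustration of the boids "alignment" rule (values invented):
# each bird steers toward the mean heading of the flock, and the
# formation tightens with no central controller.

def align(headings, rate=0.5):
    """One update step: every bird moves toward the flock's mean heading."""
    mean = sum(headings) / len(headings)
    return [h + rate * (mean - h) for h in headings]

headings = [0.0, 90.0, 180.0]   # initial headings, in degrees
for _ in range(10):
    headings = align(headings)

# After a few steps the headings have nearly converged,
# so the flock holds a consistent formation.
print(headings)
```

Real flocking also involves separation and cohesion rules, but even this single rule shows the core idea of swarm intelligence: global order arising from purely local interactions.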
Swarm intelligence has been used for probabilistic routing in telecommunication networks and in the creation of microbots. One example of this concept is the tiny robots created by MicroFactory. The robots are controlled by a circuit board that generates a magnetic field, and the robots themselves are magnets. The robots are also equipped with small manipulation tools that they can use to interact with the environment around them and manipulate objects.
The development of genuinely human-like artificial intelligence, or Artificial General Intelligence (AGI), will take some time. Estimates by AI experts vary, but on average it is thought that creating an AGI will take around 50 years. In contrast, developing distributed autonomous agents like the ones F-Secure is building should take significantly less time.
According to F-Secure, several more years of development will be needed for the distributed intelligence architecture to reach its full potential, but some mechanisms based on the swarm-intelligence model are already in use. F-Secure has used some swarm-intelligence techniques to detect breaches and engineer solutions.
F-Secure’s AI agents are capable of communicating with each other and collaborating.
Swarm intelligence techniques draw on the capabilities of the individual agents in the pool; when these skills are networked together, the result is a robust and flexible system capable of carrying out complex tasks.
“Essentially, you’ll have a colony of fast local AIs adapting to their own environment while working together, instead of one big AI making decisions for everyone,” Aksela explained.
In F-Secure's case, the different agents can learn from different networks and hosts, and they can then spread this knowledge through the wider network that joins together different organizations. F-Secure says one of the main benefits of this approach is that it lets organizations share sensitive information via the cloud and still remain protected, thanks to superior breach and attack detection.
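The knowledge-sharing idea can be sketched in a few lines. This is a hypothetical illustration, not F-Secure's actual API; the class and method names are invented. Each local agent learns indicators from its own host, then shares only that derived knowledge, never the raw sensitive data, with its peers.

```python
# Hypothetical sketch of decentralized agents sharing derived knowledge
# (class and method names invented; not F-Secure's actual implementation).

class LocalAgent:
    def __init__(self, name):
        self.name = name
        self.known_indicators = set()   # knowledge derived from local traffic

    def observe(self, indicator):
        """Learn an attack indicator from the local environment."""
        self.known_indicators.add(indicator)

    def share(self, peers):
        """Spread derived indicators to peers; raw local data stays local."""
        for peer in peers:
            peer.known_indicators |= self.known_indicators

a = LocalAgent("host-a")
b = LocalAgent("host-b")
a.observe("suspicious-domain.example")   # hypothetical indicator
a.share([b])
print("suspicious-domain.example" in b.known_indicators)  # True
```

The design choice mirrors Aksela's "colony of fast local AIs": each agent adapts to its own environment, and only the distilled results circulate through the swarm.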
Scientists Developing Robotic Networks to Make Smart Satellites
Scientists are currently developing independent robotic networks that work together to create smart satellites. Those smart satellites could then be used to repair others in space. Currently, it is extremely difficult to do anything about broken satellites, and satellites break quite often. Because there is no real solution, the expensive satellites end up orbiting Earth for years until gravity eventually pulls them back into the atmosphere.
Ou Ma, a professor from the University of Cincinnati, is engineering robotics technology to fix the orbiting satellites before they break. He runs the Intelligent Robotics and Autonomous Lab at the university, and he would like to create robotic satellites that are capable of docking with other satellites for repairs and refueling.
The best repair satellite will be capable of performing multiple tasks, according to Ma. He has a long career involving various projects that deal with robotic arms on the International Space Station, as well as the former space shuttle program.
In the lab, Ma and UC senior research associate Anoop Sathyan are working on robotic networks that work independently and collaboratively on a common task.
In their latest study, the pair tested a group of robots with a novel game: strings attached to a token, which the robots must move to a target spot on a table. Each robot controls one string, so it needs the help of the others to move the token to the right spot, releasing or increasing tension on its string in response to the other robots' actions.
Using a type of artificial intelligence called genetic fuzzy logic, the team was able to get three robots, and later five, to move the token to the desired spot.
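The cooperative control loop behind the string game can be sketched with a crude one-dimensional toy: two robots pull a token from opposite sides, and each independently adjusts its own tension with a simple proportional rule. This is only a stand-in for the genetic fuzzy controllers used in the study, meant to show how independent local decisions can still accomplish a shared task.

```python
# Crude 1-D stand-in for the string game (not the study's genetic fuzzy
# logic): two robots pull a token in opposite directions, each reacting
# only to the token's current error relative to the target.

def step(position, target, gain=0.2):
    """One control step: each robot independently sets its own pull."""
    error = target - position
    left_pull  = max(0.0, -error) * gain   # left robot pulls if token overshot
    right_pull = max(0.0,  error) * gain   # right robot pulls if token is short
    return position + right_pull - left_pull

position, target = 0.0, 1.0
for _ in range(30):
    position = step(position, target)

print(abs(position - target) < 0.01)  # True: the token settles on the target
```

A fuzzy controller would replace the fixed proportional gain with rules tuned (in the genetic fuzzy approach, evolved) from experience, which is what lets the real system tolerate a malfunctioning teammate.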
The results of the research and experiments were published in the journal Robotica this month.
When the researchers used five different robots, they learned that the task can be completed even if one of them malfunctions.
“This will be especially true for problems with larger numbers of robots where the liability of an individual robot will be low,” the researchers concluded.
According to Ma, every satellite launch has the possibility of countless problems, and it is almost always impossible to do anything about it once the satellite is deployed.
Earlier this year, a $400 million Intelsat satellite, the same size as a small school bus, malfunctioned after reaching a high elliptical orbit. Some of the first 60 Starlink satellites launched by SpaceX also malfunctioned this year. In the case of SpaceX, the satellites were designed to orbit Earth at a low altitude, causing them to decay after a few years.
The most well-known satellite malfunction took place in 1990, when the Hubble Space Telescope was deployed and NASA later learned that its mirror was flawed. A repair mission aboard the space shuttle Endeavour followed in 1993, installing corrective optics that compensated for the flaw and finally allowed sharp images of the universe to make it back to Earth.
Sending humans to space in order to repair satellites is extremely expensive, according to Ma. The missions can cost billions of dollars and are difficult to complete.
The issues become more prominent every time a satellite is launched.
“Big commercial satellites are costly. They run out of fuel or malfunction or break down,” Ma said. “They would like to be able to go up there and fix it, but nowadays it’s impossible.”
NASA is looking to launch a satellite in 2022 that is capable of refueling others in low Earth orbit; it will set out to intercept and refuel a U.S. government satellite. The project, called Restore-L, is expected to be the proof of concept for autonomous satellite repairs, according to NASA.
Maxar, a company out of Colorado, will be responsible for the spacecraft infrastructure and robotic arms for the project.
According to John Lymer, chief roboticist at Maxar, most satellites fail because they run out of fuel.
“You’re retiring a perfectly good satellite because it ran out of gas,” he said.
“Ou Ma, who I’ve worked with for many years, works on rendezvous and proximity operations. There are all kinds of technical solutions out there. Some will be better than others. It’s about getting operational experience to find out whose algorithms are better and what reduces operational risk the most.”