

Facebook Uses Bots and Simulations to Try and Counter Bad Behavior of Users


Facebook has designed a new AI system intended to better detect harmful and illegal behavior. As The Verge reports, researchers at Facebook's London AI department have created an AI-driven Facebook simulator called "WW", which models the behavior of scammers, sellers of illegal goods, spammers, and other bad actors on a parallel, walled-off version of Facebook itself.

The simulator, which takes its name from a truncation of "WWW", was revealed by Facebook in a paper published in April of this year. WW is a cloned, contained version of Facebook intended to help test various Facebook tools and algorithms.

The company recently divulged more details about some of WW's uses, one of which is the simulation of bad actors through AI. By using bots to mimic behaviors such as scamming, spamming, and harassment, the researchers hope to better detect and counter harmful behavior by real users.

According to Facebook engineer Mark Harman, as quoted by The Verge, WW is anticipated to be a valuable tool for curbing various harmful behaviors on Facebook. For instance, Harman believes the simulations can be used to engineer better methods of detecting scammers.

Facebook engineers mimicked the behavior of real-life Facebook scammers by creating two groups of bots: one group of targets and one group of scammers. Scammers often hunt through networks of friends, exploring the friends of users, in order to find a potential target. The scammer bots mimicked this behavior while the engineers experimented with different methods of preventing the target bots from being scammed. The tactics they tried included constraints such as limiting how many private messages a bot could send per minute.
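Facebook has not released WW's code, but the two-group setup described above can be sketched as a toy agent-based simulation. Everything below, the class names, the random friend graph, and the rate-limit value, is illustrative and not Facebook's actual implementation:

```python
import random

class Bot:
    """A simulated user: just an id and a list of friends."""
    def __init__(self, bot_id):
        self.id = bot_id
        self.friends = []

def build_network(n_bots=100, n_friends=5, seed=0):
    """Wire the bots into a random friend graph."""
    random.seed(seed)
    bots = [Bot(i) for i in range(n_bots)]
    for bot in bots:
        bot.friends = random.sample([b for b in bots if b is not bot], n_friends)
    return bots

def simulate_minute(bots, scammer_ids, max_msgs_per_minute):
    """One simulated minute: scammer bots hunt through friends-of-friends
    for targets, capped by the per-minute message limit under test."""
    scam_messages = 0
    for bot in bots:
        if bot.id not in scammer_ids:
            continue  # only scammer bots act in this sketch
        for _ in range(max_msgs_per_minute):        # the constraint being tested
            friend = random.choice(bot.friends)     # explore own friends...
            victim = random.choice(friend.friends)  # ...then friends-of-friends
            if victim.id not in scammer_ids:
                scam_messages += 1  # a target bot received a scam message
    return scam_messages

bots = build_network()
scammer_ids = set(range(10))  # bots 0-9 play the scammers
print(simulate_minute(bots, scammer_ids, max_msgs_per_minute=3))
```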

There are a few ways that the simulated Facebook differs from the real thing. For one, the simulation has no visual elements, so everything it produces takes the form of numerical data and statistics on interactions between bots. For another, all actors in the simulation are bots, and they cannot interact with real users. WW also can't account for things like user intent or the content of a given conversation, as only actions, such as sending messages and making comments, are simulated.

According to Harman, this process of experimenting with constraints is similar to urban planners laying down "speed bumps" to reduce speed on certain roads. Just as a city planner might install speed bumps and then collect data on their effect, the engineers analyzed how messages and interactions between bots changed as they adjusted parameters and constraints in the simulator. Harman explains that the goal is to identify changes to Facebook's platform that inhibit harmful behavior without severely limiting normal behavior, the free flow of traffic.
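The speed-bump analogy maps naturally onto a parameter sweep: rerun the simulation at several rate-limit settings and compare how much scam traffic each permits against a stand-in for normal traffic. Continuing the toy sketch above (again, the metrics and the assumed two-messages-per-minute baseline are hypothetical, not figures from Facebook's research):

```python
def simulate_normal_minute(bots, scammer_ids, max_msgs_per_minute):
    """Stand-in for legitimate traffic: ordinary bots each send a couple
    of messages to friends, subject to the same per-minute limit."""
    messages = 0
    for bot in bots:
        if bot.id not in scammer_ids:
            messages += min(max_msgs_per_minute, 2)  # assumed ~2 msgs/min baseline
    return messages

# Sweep the "speed bump" height: a good constraint cuts scam messages
# sharply while barely touching normal traffic.
for limit in (1, 2, 3, 5, 10):
    scam = simulate_minute(bots, scammer_ids, limit)
    normal = simulate_normal_minute(bots, scammer_ids, limit)
    print(f"limit={limit:2d}  scam={scam:4d}  normal={normal:4d}")
```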

Harman also explains that the benefit of using WW for these simulations is that the actions being studied occur on real Facebook infrastructure, which gives the researchers a much better idea of how their proposed changes could impact real Facebook users. Any application of these findings will have to wait some time, however, as WW is still in the research stage. Harman and the other Facebook researchers won't yet apply their findings to the live version of Facebook; the group first needs to ascertain that the simulations it creates adequately match real human behavior.

The main benefit of WW, according to Harman, is its ability to operate on a massive scale, letting Facebook researchers check the potential consequences of thousands of different minor tweaks, all through the simulations it produces.

In the future, the researchers may let the bots explore and experiment freely for a while, to see what kinds of interactions they come up with on their own, which are often behaviors the researchers weren't even anticipating.

“At the moment, the main focus is training the bots to imitate things we know happen on the platform. But in theory and in practice, the bots can do things we haven’t seen before,” said Harman. “That’s actually something we want, because we ultimately want to get ahead of the bad behavior rather than continually playing catch up.”

If all goes well, Facebook could start making modifications based on WW’s simulations by the end of 2020.