Fake news and hate speech online are no longer a daily problem but a minute-by-minute one. IkigaiLab reports that Facebook and Twitter recently had to close more than 1.5 billion and 70 million accounts respectively just to curb the spread of fake news and hate speech around the world.
Still, at the moment such a task requires enormous manpower and near-constant working hours just to chip away at the tip of the hate speech iceberg. To address the problem, researchers in numerous labs have begun training artificial intelligence (AI) to help with this humongous task.
Ikigai cites the Rosetta system that Facebook uses to assess the authenticity of news, images and other content uploaded to the platform. As is explained, Rosetta scans “the word, picture, language, font, date of the post amongst other variables and tries to see if the information being presented is genuine or not.” Once the system has gathered that information, and given that AI is still not fully “adept at understanding innuendoes, references, slights and the contexts in which the content was posted,” human moderators take over and guide the AI system in discovering hate speech and fake news.
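The kind of per-post signal bundle described above can be sketched as a simple feature extractor. This is purely illustrative: the field names and dictionary layout are assumptions for the sketch, not Facebook's actual Rosetta interface, and the real system scans images as well as text.

```python
# Hypothetical sketch of the feature bundle a system like Rosetta might
# extract per post; all field names here are illustrative placeholders.
def extract_post_features(post):
    """Collect the text, language, typography and date signals of a post."""
    return {
        "text": post.get("text", ""),
        "language": post.get("language", "unknown"),
        "font": post.get("font"),          # typography detected in images
        "posted_at": post.get("date"),
        "has_image": bool(post.get("image")),
    }

post = {"text": "Breaking news!", "language": "en", "date": "2019-10-01"}
features = extract_post_features(post)
print(features["has_image"])  # False: no image attached to this post
```

A downstream classifier (and, per the article, human moderators) would then judge authenticity from features like these.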
To further develop AI systems' ability to cover all the possible nuances that characterise hate speech, a team of researchers at UC Santa Barbara and Intel, as TheNextWeb (TNW) reports, “took thousands of conversations from the scummiest communities on Reddit and Gab and used them to develop and train AI to combat hate speech.”
According to their report, the joint group of researchers created a specific dataset featuring “thousands of conversations specially curated to ensure they’d be chock full of hate speech.” They also drew on a list, compiled by Justin Caffier of Vox, of the Reddit groups most characterized by hate speech.
The researchers ended up collecting “more than 22,000 comments from Reddit and over 33,000 from Gab.” They found that while the two sites share many of the most popular hate keywords, the distributions of those keywords differ markedly.
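Comparing keyword distributions across two corpora, as described above, can be sketched with a simple frequency count. This is a minimal stand-in, not the researchers' method: the keyword set and corpora here are placeholder tokens (`kw1`, `kw2`), not real slurs or the actual Reddit/Gab data.

```python
from collections import Counter

def keyword_distribution(comments, keywords):
    """Normalized frequency of each tracked keyword across a corpus."""
    counts = Counter()
    for comment in comments:
        for token in comment.lower().split():
            if token in keywords:
                counts[token] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {k: counts[k] / total for k in keywords}

# Toy stand-in corpora; "kw1"/"kw2" are placeholders, not real keywords.
keywords = {"kw1", "kw2"}
corpus_a = ["kw1 something", "kw1 again", "kw2 here"]
corpus_b = ["kw2 post", "kw2 more", "kw1 once"]

print(keyword_distribution(corpus_a, keywords))  # kw1-heavy distribution
print(keyword_distribution(corpus_b, keywords))  # kw2-heavy distribution
```

Even with identical keyword sets, the two distributions can diverge sharply, which is the pattern the researchers observed between Reddit and Gab.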
They noted that these differences make it very hard for social media platforms in general to intervene in real time: the flow of hate speech is so high that keeping up would require countless human moderators.
To tackle the problem, the research team began training AI to intervene. Their initial database was sent to Amazon Mechanical Turk workers to be labeled. After identifying individual instances of hate speech, the workers wrote phrases that the AI could use “to deter users from posting similar hate speech in the future.”
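The crowd-labeling step above pairs each flagged conversation with worker-written deterrent phrases. A plausible record layout for such data can be sketched as follows; the class and field names are hypothetical, not the researchers' actual schema.

```python
from dataclasses import dataclass

# Hypothetical record layout for the crowd-labeled data; field names are
# illustrative assumptions, not the study's real annotation format.
@dataclass
class LabeledConversation:
    comments: list        # the raw conversation thread
    hate_indices: list    # worker-marked positions of hateful comments
    interventions: list   # worker-written responses meant to deter reposts

example = LabeledConversation(
    comments=["normal comment", "<hateful comment>", "a reply"],
    hate_indices=[1],
    interventions=["Please keep this community respectful."],
)
print(len(example.interventions))  # 1 suggested intervention for this thread
```

Pairing the labels with free-text interventions is what distinguishes this dataset from plain hate-speech classification corpora.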
Based on that, the team “ran this dataset and its database of interventions through various machine learning and natural language processing systems and created a sort of prototype for an online hate speech intervention AI.”
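The detect-then-respond loop can be illustrated with a deliberately simple stand-in: a weighted bag-of-words score paired with stored intervention messages. The lexicon, weights, threshold, and messages below are all placeholder assumptions; the researchers used machine learning and natural language processing models, not a hand-written lexicon.

```python
# Minimal illustrative stand-in for the intervention pipeline: flag a
# comment with a bag-of-words score, then attach a deterrent message.
# "kw1"/"kw2" are placeholder tokens, not real hate keywords.
HATE_LEXICON = {"kw1": 1.0, "kw2": 0.8}
INTERVENTIONS = {
    "kw1": "This language targets a group; please rephrase your point.",
    "kw2": "Consider how this wording affects others before posting.",
}

def score(comment):
    """Sum the lexicon weights of tokens appearing in the comment."""
    return sum(HATE_LEXICON.get(t, 0.0) for t in comment.lower().split())

def intervene(comment, threshold=0.5):
    """Return a deterrent message if the comment scores above threshold."""
    if score(comment) < threshold:
        return None
    hits = [t for t in comment.lower().split() if t in INTERVENTIONS]
    return INTERVENTIONS[hits[0]] if hits else None

print(intervene("a kw1 filled comment"))      # prints a deterrent message
print(intervene("a perfectly fine comment"))  # prints None
```

A real system, as the article notes next, must go well beyond this kind of keyword matching to handle context.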
The results were excellent, but since development is still at an early stage, the system is not yet ready for active use. As is explained, “the system, in theory, should detect hate speech and immediately send a message to the poster letting them know why they shouldn’t post things that obviously represent hate speech. This relies on more than just keyword detection – in order for the AI to work it has to get the context right.”