Artificial Intelligence

Startups Creating AI Tools To Detect Email Harassment

Since the Me Too movement came to prominence in late 2017, more and more attention has been paid to incidents of sexual harassment, including harassment in the workplace and harassment carried out over email or instant messaging.

As reported by The Guardian, AI researchers and engineers have been creating tools, dubbed MeTooBots, to detect harassment in text communications. MeTooBots are being implemented by companies around the world in order to flag potentially harmful and harassing communications. One example is a bot created by the company Nex AI, which is currently in use at around 50 different companies. The bot utilizes an algorithm that examines company documents, chat logs, and emails and compares them against its training data of bullying and harassing messages. Messages deemed potentially harassing or harmful can then be sent to an HR manager for review, although Nex AI has not revealed the specific terms the bot looks for in the communications it analyzes.
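Because Nex AI has not disclosed how its bot works or which terms it searches for, any concrete illustration is necessarily an assumption. The sketch below simply shows the general pattern such a tool might follow: train a text classifier on labeled examples, then route high-scoring messages to a human reviewer. The training messages, the review threshold, and the flag_for_hr_review helper are all hypothetical.

```python
# Illustrative sketch only: not Nex AI's actual method. Trains a simple
# text classifier on labeled examples and flags high-scoring messages
# for HR review rather than acting on them automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: message text paired with harassment labels.
train_texts = [
    "Great work on the quarterly report, thanks!",
    "Meet me after hours or your contract won't be renewed.",
    "Can you send me the updated slides?",
    "You'd better smile more if you want that promotion.",
]
train_labels = [0, 1, 0, 1]  # 1 = potentially harassing, 0 = benign

# Simple bag-of-words model; a production system would need far more data
# and almost certainly a stronger model.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(train_texts, train_labels)

def flag_for_hr_review(message: str, threshold: float = 0.8) -> bool:
    """Return True if the message scores above the review threshold."""
    score = classifier.predict_proba([message])[0][1]
    return score >= threshold
```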

Other startups have also created AI-powered harassment detection tools. The AI startup Spot has built a chatbot that lets employees anonymously report allegations of sexual harassment. The bot asks questions and gives advice in order to collect more details and further an investigation into the incident. Spot wants to help HR teams deal with harassment issues in a sensitive manner while still ensuring anonymity is preserved.

According to The Guardian, Brian Subirana, an AI professor at MIT and Harvard, explained that attempts to use AI to detect harassment have their limitations. Harassment can be very subtle and hard to pick up, frequently manifesting only as a pattern that emerges when weeks of data are examined. Bots also can't, as of yet, go beyond the detection of certain trigger words to analyze the broader interpersonal or cultural dynamics that may be at play. Despite the complexities of detecting harassment, Subirana does believe that bots could play a role in combating online harassment. He could see the bots being used to train people to recognize harassment when they see it, building up a database of potentially problematic messages. Subirana also suggested there could be a placebo effect: people may be less likely to harass their colleagues if they suspect their messages are being scrutinized, even when they aren't.
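To make Subirana's point concrete, a per-message trigger-word check is easy to write but says nothing about behavior that only appears across weeks of messages. The rough sketch below contrasts the two by aggregating per-sender flags over a time window; the trigger list, window size, and thresholds are invented for illustration and do not reflect any vendor's tool.

```python
# Illustrative sketch of the limitation described above: a fixed trigger
# list checks single messages, while a windowed per-sender count can
# surface a pattern that no single message reveals.
from collections import defaultdict
from datetime import timedelta

# Hypothetical trigger list; real tools keep theirs private.
TRIGGER_WORDS = {"sweetheart", "smile more", "after hours"}

def message_triggers(text: str) -> bool:
    """Per-message check: does the text contain any trigger phrase?"""
    lowered = text.lower()
    return any(term in lowered for term in TRIGGER_WORDS)

def senders_with_pattern(messages, window_days=30, min_hits=3):
    """messages: iterable of (sender, timestamp, text) tuples.

    Returns senders whose flagged messages cluster within the window,
    which a single-message trigger check would never reveal.
    """
    hits = defaultdict(list)
    for sender, ts, text in messages:
        if message_triggers(text):
            hits[sender].append(ts)

    flagged = []
    for sender, times in hits.items():
        times.sort()
        for i, start in enumerate(times):
            window_end = start + timedelta(days=window_days)
            if sum(1 for t in times[i:] if t <= window_end) >= min_hits:
                flagged.append(sender)
                break
    return flagged
```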

While Subirana believes that bots have potential uses in combating harassment, he also argued that confidentiality of data and privacy are a major concern, stating that such technology could create an atmosphere of distrust and suspicion if misused. Sam Smethers, the chief executive of the women's rights NGO the Fawcett Society, also expressed concern about how the bots could be misused. Smethers stated:

“We would want to look carefully at how the technology is being developed, who is behind it, and whether the approach taken is informed by a workplace culture that is seeking to prevent harassment and promote equality, or whether it is in fact just another way to control their employees.”

Methods of using bots to detect harassment while still protecting anonymity and privacy will have to be worked out between bot developers, companies, and regulators. One possible way of utilizing the predictive power of bots and AI while safeguarding privacy is to keep communications anonymous. For instance, the bot could generate reports that only note the presence of potentially harmful language and count how often it appears. HR could then gauge whether uses of toxic language are dropping following awareness seminars, or conversely determine whether they should be on the lookout for increased harassment.
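As a sketch of that aggregate-reporting idea, the snippet below records only monthly counts of flagged messages and retains no message text or sender identities. The build_aggregate_report helper and the stand-in flagger are hypothetical, not any vendor's actual reporting format.

```python
# Sketch of privacy-preserving aggregate reporting: HR sees trends in
# flagged language per reporting period, never the messages themselves.
from collections import Counter
from datetime import datetime

def build_aggregate_report(messages, flagger) -> Counter:
    """messages: iterable of (timestamp, text); flagger: text -> bool.

    Returns counts of flagged messages keyed by month, with no content
    or identities retained.
    """
    counts = Counter()
    for ts, text in messages:
        if flagger(text):
            counts[ts.strftime("%Y-%m")] += 1
    return counts

# Example usage with a trivial stand-in flagger:
sample = [
    (datetime(2019, 10, 3), "see me after hours or else"),
    (datetime(2019, 11, 12), "great job on the launch"),
]
report = build_aggregate_report(sample, lambda t: "after hours" in t.lower())
print(report)  # Counter({'2019-10': 1})
```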

Despite the disagreement over appropriate uses of machine learning algorithms and bots in detecting harassment, both sides seem to agree that the ultimate decision to intervene in cases of harassment should be made by a human, and that bots should only ever alert people to matched patterns rather than declaring definitively that something was an instance of harassment.

Blogger and programmer with specialties in Machine Learning and Deep Learning topics. Daniel hopes to help others use the power of AI for social good.