
Natural Language Processing

Startups Creating AI Tools To Detect Email Harassment


Since the Me Too movement came to prominence in late 2017, more and more attention has been paid to incidents of sexual harassment, including workplace harassment and harassment carried out over email or instant messaging.

As reported by The Guardian, AI researchers and engineers have been creating tools to detect harassment in text communications, dubbed MeTooBots. MeTooBots are being implemented by companies around the world in order to flag potentially harmful and harassing communications. One example is a bot created by the company Nex AI, which is currently being used by around 50 different companies. The bot utilizes an algorithm that examines company documents, chats, and emails and compares them to its training data of bullying or harassing messages. Messages deemed potentially harassing or harmful can then be sent to an HR manager for review, although Nex AI has not revealed the specific terms the bot looks for across the communications it analyzes.
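
Nex AI has not disclosed how its bot works, but the behavior described, comparing new messages against training data of bullying or harassing language, resembles standard text classification. The sketch below is a minimal, purely hypothetical example of that general technique using scikit-learn; the training examples and the flag_for_review helper are assumptions made for illustration, not Nex AI's system.

```python
# Purely illustrative sketch, not Nex AI's method: a minimal text classifier that
# flags messages by similarity to labeled examples of harassing language.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training examples: 1 = bullying/harassing, 0 = benign.
train_texts = [
    "You're useless, and no one here wants you around.",
    "Send me a photo or you'll regret it.",
    "Can you review the Q3 report before Friday?",
    "Thanks for the update, see you at standup.",
]
train_labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression make a simple, interpretable baseline.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(train_texts, train_labels)

def flag_for_review(message: str, threshold: float = 0.5) -> bool:
    """Return True if the message should be escalated to an HR manager for review."""
    probability = classifier.predict_proba([message])[0][1]
    return probability >= threshold

print(flag_for_review("Nobody wants you on this team."))
```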

Other startups have also created AI-powered harassment detection tools. The AI startup Spot has built a chatbot that enables employees to anonymously report allegations of sexual harassment. The bot asks questions and gives advice in order to collect more details and further an investigation into the incident. Spot wants to help HR teams deal with harassment issues in a sensitive manner while ensuring anonymity is preserved.

According to The Guardian, Prof. Brian Subirana, an AI professor at MIT and Harvard, explained that attempts to use AI to detect harassment have their limitations. Harassment can be very subtle and hard to pick up, frequently manifesting only as a pattern that reveals itself when examining weeks of data. Bots also cannot, as of yet, go beyond the detection of certain trigger words to analyze the broader interpersonal or cultural dynamics that could be at play. Despite the complexities of detecting harassment, Subirana does believe that bots could play a role in combating online harassment. He could see the bots being used to train people to recognize harassment when they see it, creating a database of potentially problematic messages. Subirana also stated that there could be a placebo effect that makes people less likely to harass their colleagues if they suspect their messages are being scrutinized, even when they aren't.

While Subirana does believe that bots have potential uses in combating harassment, he also argued that confidentiality of data and privacy are major concerns. He stated that such technology could create an atmosphere of distrust and suspicion if misused. Sam Smethers, the chief executive of the women's rights NGO the Fawcett Society, also expressed concern about how the bots could be misused. Smethers stated:

“We would want to look carefully at how the technology is being developed, who is behind it, and whether the approach taken is informed by a workplace culture that is seeking to prevent harassment and promote equality, or whether it is in fact just another way to control their employees.”

Methods of using bots to detect harassment while still protecting anonymity and privacy will have to be worked out between bot developers, companies, and regulators. One possible way of utilizing the predictive power of bots and AI while safeguarding privacy is to keep communications anonymous. For instance, the bot could generate reports that only include the presence of potentially harmful language and counts of how often it appears. HR could then get an idea of whether uses of toxic language are dropping following awareness seminars, or conversely determine whether they should be on the lookout for increased harassment.
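
As a hypothetical illustration of that anonymized-reporting idea, the sketch below tallies how often flagged terms appear across a batch of messages and then discards the messages themselves; the FLAGGED_TERMS watchlist and monthly_report helper are assumptions made for the example, not part of any vendor's product.

```python
# Hypothetical sketch of anonymized aggregate reporting: only counts of flagged
# terms are retained, never the message text or the identities of senders.
from collections import Counter
import re

# Assumed watchlist for illustration only; a real deployment would use a vetted
# lexicon or a trained classifier rather than a hard-coded list.
FLAGGED_TERMS = {"useless", "stupid", "regret it"}

def monthly_report(messages: list[str]) -> Counter:
    """Count occurrences of flagged terms across a batch of messages.

    The raw messages are discarded after counting, so the resulting report
    contains no message content and no identifying information.
    """
    counts = Counter()
    for message in messages:
        lowered = message.lower()
        for term in FLAGGED_TERMS:
            counts[term] += len(re.findall(re.escape(term), lowered))
    return counts

# HR could compare month-over-month totals to see whether toxic language drops
# after awareness seminars, or whether it is trending upward.
print(monthly_report(["You are useless.", "Great job on the deck!"]))
```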

Despite the disagreement over appropriate uses of machine learning algorithms and bots in detecting harassment, both sides seem to agree that the ultimate decision to intervene in cases of harassment should be made by a human, and that bots should only ever alert people to matched patterns rather than declaring definitively that something was an instance of harassment.


Cybersecurity

Deep Learning Used to Trick Hackers


A group of computer scientists at the University of Texas at Dallas has developed a new approach to cybersecurity defense. Rather than blocking hackers, the method entices them in.

The newly developed method is called DEEP-Dig (DEcEPtion DIGging), and it entices hackers into a decoy site so that the computer can learn their tactics. The computer is then trained on that information in order to recognize and stop future attacks.

The UT Dallas researchers presented their paper, titled "Improving Intrusion Detectors by Crook-Sourcing," at the annual Computer Security Applications Conference in December in Puerto Rico. The group also presented "Automating Cyberdeception Evaluation with Deep Learning" at the Hawaii International Conference on System Sciences in January.

DEEP-Dig is part of an increasingly popular cybersecurity field called deception technology. As the name suggests, this field relies on traps set for hackers. The researchers hope the approach can be used effectively by defense organizations.

Dr. Kevin Hamlen is a Eugene McDermott Professor of computer science at UT Dallas.

“There are criminals trying to attack our networks all the time, and normally we view that as a negative thing,” he said. “Instead of blocking them, maybe what we could be doing is viewing these attackers as a source of free labor. They’re providing us data about what malicious attacks look like. It’s a free source of highly prized data.”

This new approach aims to solve some of the major problems associated with the use of artificial intelligence (AI) for cybersecurity. One of those problems is a shortage of the data needed to train computers to detect hackers, a shortage driven largely by privacy concerns. According to Gbadebo Ayoade MS'14, PhD'19, who presented the findings at the conferences and is now a data scientist at Procter & Gamble Co., better data means a better ability to detect attacks.

“We’re using the data from hackers to train the machine to identify an attack,” said Ayoade. “We’re using deception to get better data.”

According to Hamlen, hackers typically begin with simpler tricks and progressively become more sophisticated. Most cyber defense programs in use today attempt to disrupt intruders immediately, so the intruders' techniques are never learned. DEEP-Dig instead pushes hackers into a decoy site full of disinformation so that their techniques can be observed. According to Dr. Latifur Khan, professor of computer science at UT Dallas, the decoy site appears legitimate to the hackers.
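
To make the idea concrete, the sketch below shows one plausible way such data could be used: sessions observed on a decoy are assumed to be malicious and serve as labeled training examples for a detector. The feature set and the RandomForestClassifier baseline are illustrative assumptions, not the approach described in the DEEP-Dig papers.

```python
# Conceptual sketch only, not the DEEP-Dig implementation: sessions that hit the
# decoy site are treated as labeled attacker traffic and combined with benign
# traffic from the real site to train an intrusion detector.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-session features:
# [requests_per_minute, distinct_paths_probed, error_response_rate, payload_entropy]
benign_sessions = [
    [12, 3, 0.01, 3.1],
    [8, 2, 0.00, 2.9],
]
decoy_sessions = [  # captured on the decoy, so labeled as malicious
    [240, 87, 0.62, 6.8],
    [310, 120, 0.71, 7.2],
]

features = benign_sessions + decoy_sessions
labels = [0] * len(benign_sessions) + [1] * len(decoy_sessions)

detector = RandomForestClassifier(n_estimators=100, random_state=0)
detector.fit(features, labels)

# New sessions can now be scored; suspicious ones generate alerts for human review
# rather than being judged definitively by the model.
print(detector.predict([[200, 95, 0.55, 6.5]]))
```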

“Attackers will feel they’re successful,” Khan said.

Cyberattacks are a major concern for governmental agencies, businesses, nonprofits, and individuals. According to a report to the White House from the Council of Economic Advisers, the attacks cost the U.S. economy more than $57 billion in 2016.

DEEP-Dig could play a major role in evolving defense tactics as hacking techniques evolve. Intruders could disrupt the method if they realize they have entered a decoy site, but Hamlen is not overly concerned.

“So far, we’ve found this doesn’t work. When an attacker tries to play along, the defense system just learns how hackers try to hide their tracks,” Hamlen said. “It’s an all-win situation — for us, that is.”

Other researchers involved in the work include Frederico Araujo PhD’16, research scientist at IBM’s Thomas J. Watson Research Center; Khaled Al-Naami PhD’17; Yang Gao, a UT Dallas computer science graduate student; and Dr. Ahmad Mustafa of Jordan University of Science and Technology.

The research was partly supported by the Office of Naval Research, the National Security Agency, the National Science Foundation, and the Air Force Office of Scientific Research.

 


Cybersecurity

AI Security Monitoring & Job Recruitment Companies Raise Funds


VentureBeat reports on two sizable new funding rounds for startups developing artificial intelligence. Umbo Computer Vision (UCV) provides autonomous video security systems to businesses, while Xor is developing an AI chatbot platform for recruiters and job seekers. Both startups are based in San Francisco; UCV, a joint venture with Taiwan, also has bases there and in the UK. UCV raised $8 million for its AI-powered video security, while Xor managed to raise $8.4 million for its project.

Xor’s capital infusion came after a year in which the startup tripled its sales in the US, “reaching $2 million in annual recurring revenue and closing deals with over 100 customers in 15 countries, including ExxonMobil, Ikea, Baxter Personnel, Heineken, IBS, Aldi, Hoff, McDonald’s, and Mars.” As the company co-founder and CEO Aida Fazylova explains, she “started the company to let recruiters focus on the human touch — building relationships, interviewing candidates, and attracting the best talent to their companies. Meanwhile, AI takes care of repetitive tasks and provides 24/7 personalized service to every candidate. We are proud to get support from SignalFire and other amazing investors who help us drive our mission to make the recruitment experience better and more transparent for everyone.”

Xor’s chatbot “automates tedious job recruitment tasks, like scheduling interviews; sorting applications; and responding to questions via email, text, and messaging apps like Facebook Messenger and Skype. The eponymous Xor — which is hosted on Microsoft’s Azure — draws on over 500 sources for suitable candidates and screens those candidates autonomously, leveraging 103 different languages and algorithms trained on 17 different HR and recruitment data sets.”

According to Grand View Research, the chatbot market is expected to reach $1.23 billion by 2025, while Gartner predicts that chatbots will power 85% of all customer service interactions by the year 2020.

For its part, Umbo develops "software, hardware, and AI smarts that can detect and identify human behaviors related to security, such as intrusion, tailgating (when an unauthorized individual follows someone into private premises), and wall-scaling."

The company says it has developed its AI systems entirely in-house, and its system incorporates three components. "AiCameras are built in-house and feature built-in AI chips, connecting directly to the cloud to bypass servers and video recording intermediates, such as NVRs or DVRs." "Light is AI-powered software for detecting and issuing alerts on human-related security actions." There is also "TruePlatform, a centralized platform where businesses can monitor and manage all their cameras, users, and security events." As Shawn Guan, Umbo's cofounder and CEO, points out, the company launched Umbo Light, "which implemented feedback that we gathered from our customers about what their primary wants from video security systems were. This allowed us to design and deliver a system based on the needs of those who use it most."

The global video surveillance market, which now relies heavily on AI, was pegged at $28 billion in 2017 and is expected to grow to more than $87 billion by 2025.

 


Cybersecurity

Cybersecurity Experts Defend from AI Cyberattacks


Not everybody poised to take advantage of artificial intelligence has good intentions. Cybersecurity is certainly one of those fields where both those trying to defend a system and those trying to attack it are using the most advanced technologies available.

In its analysis of the subject, the World Economic Forum (WEF) cites an example from March 2019, when "the CEO of a large energy firm sanctioned the urgent transfer of €220,000 to what he believed to be the account of a new Eastern European supplier after a call he believed to be with the CEO of his parent company. Within hours, the money had passed through a network of accounts in Latin America to suspected criminals who had used artificial intelligence (AI) to convincingly mimic the voice of the CEO." For its part, Forbes cites an example in which "two hospitals in Ohio and West Virginia turned patients away due to a ransomware attack that led to a system failure. The hospitals could not process any emergency patient requests. Hence, they sent incoming patients to nearby hospitals."

This cybersecurity threat is certainly the reason why Equifax and the World Economic Forum convened the inaugural Future Series: Cybercrime 2025. Global cybersecurity experts from academia, government, law enforcement, and the private sector are set to meet in Atlanta, Georgia to review the capabilities AI can give them in the field of cybersecurity. In addition, the Capgemini Research Institute has published a report concluding that building up cybersecurity defenses with AI is imperative for practically all organizations.

In its analysis, the WEF identified four challenges in preventing the use of AI in cybercrime. The first is the increasing sophistication of attackers – the volume of attacks will be on the rise, and "AI-enabled technology may also enhance attackers’ abilities to preserve both their anonymity and distance from their victims in an environment where attributing and investigating crimes is already challenging."

The second is the asymmetry in the goals – while defenders must have a 100% success rate, the attackers need to be successful only once. “While AI and automation are reducing variability and cost, improving scale and limiting errors, attackers may also use AI to tip the balance.”

The third is the fact that as “organizations continue to grow, so do the size and complexity of their technology and data estates, meaning attackers have more surfaces to explore and exploit. To stay ahead of attackers, organizations can deploy advanced technologies such as AI and automation to help create defensible ‘choke points’ rather than spreading efforts equally across the entire environment.”

The fourth is achieving the right balance between the possible risks and the actual "operational enablement" of the defenders. The WEF is of the opinion that "security teams can use a risk-based approach, by establishing governance processes and materiality thresholds, informing operational leaders of their cybersecurity posture, and identifying initiatives to continuously improve it." Through its Future Series: Cybercrime 2025 program, the WEF and its partners are seeking "to identify the effective actions needed to mitigate and overcome these risks."

For its part, Forbes has identified four ways of directly using AI in cybersecurity, prepared by its contributor Naveen Joshi and presented in the graphic below:

[Graphic: Naveen Joshi's four ways of using AI directly in cybersecurity]

In any case, it is certain that both defenders and attackers in the field of cybersecurity will keep developing their use of artificial intelligence as the technology itself reaches new stages of complexity.

 
