
Cybersecurity

Artificial Intelligence Is Now Being Used To Detect Cyberbullying in School Children


Data about bullying, self-harm, and cyberbullying among schoolchildren around the world is alarming. As Jun Wu presents, US figures from 2017 show that, according to the National Center for Education Statistics and the Bureau of Justice, about 20% of students ages 12–18 experienced bullying, while according to the Centers for Disease Control and Prevention, 19% of students in grades 9–12 reported being bullied on school property in the 12 months preceding the survey.

What is even more alarming is the rise of cyberbullying, which spreads beyond the school grounds themselves. As Wu points out, “harassment in online forums, by emails, and on social media platforms can often be more damaging to the victim’s mental health than in-person bullying. Cyberbullying can often be an escalation from the school bullying. At the same time, bullying can start on social media, then work its way into the classroom.”

In Australia, researchers are reporting on a phenomenon named Momo, in which “cyber predators were taking on a persona called Momo and contacting children via social media asking them to hurt themselves,” a development that “sent ripples of concern through schools across the country.”

The need to prevent such bullying, and the possible self-harm caused by depression, has prompted a number of artificial intelligence developers to seek solutions to this widespread problem.

As Sky News reports, an AI tool called AS Tracking, developed by a company called STEER, is now in use at 150 British schools. The tool has students take an online psychological test, and in September 2019 the test will be taken by 50,000 schoolchildren.

As is explained, the test asks students to imagine a space they feel comfortable in, then poses a series of abstract questions, such as “How easy is it for somebody to come into your space?” The child responds by clicking a button on a scale that runs from “very easy” to “very difficult”. The results are sent to STEER, “which compares the data with its psychological model, then flags students who need attention in its teacher dashboard.”
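STEER has not published the details of its psychological model, but the general flow described above — collect scaled responses, compare them against a reference model, and surface flagged students on a dashboard — can be pictured with a minimal, entirely hypothetical sketch. The questions, scale encoding, and thresholds below are invented for illustration and are not STEER's.

```python
# Hypothetical illustration of the "score responses, compare to a model,
# flag outliers" flow described above. The questions, scale, and thresholds
# are invented; STEER's actual psychological model is not public.

# Responses are encoded 1 ("very easy") to 5 ("very difficult").
REFERENCE_RANGE = (2.0, 4.0)  # assumed "typical" band for the cohort

def average_score(responses: dict[str, int]) -> float:
    return sum(responses.values()) / len(responses)

def flag_student(student_id: str, responses: dict[str, int]) -> dict:
    score = average_score(responses)
    low, high = REFERENCE_RANGE
    flagged = score < low or score > high  # falls outside the typical band
    return {"student": student_id, "score": round(score, 2), "flagged": flagged}

if __name__ == "__main__":
    answers = {
        "How easy is it for somebody to come into your space?": 5,
        "How easy is it for you to leave your space?": 4,
    }
    print(flag_student("student-001", answers))
```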

According to Dr. Jo Walker, co-founder of STEER, “our tool highlights those particular children who are struggling at this particular phase of their development and it points the teachers to how that child is thinking.” Walker adds that “since introducing it the college has seen a 20% decrease in self-harm.”

In her analysis, Wu mentions a number of AI developers in the US that are helping with the problem. Securly uses AI to provide “web filtering, cyberbullying monitoring, and self-harm alerts for schools. Schools can issue Apple devices and Chromebooks to students while monitoring the student’s cyber activities. Parents can also use the apps on their home devices to monitor their children’s online activities.” Bark uses AI to monitor text messages, YouTube, emails, and 24 different social networks to alert parents to potential safety concerns. SN Technologies Corp goes a step further: its AI solutions use facial recognition to track ‘blacklisted’ students in footage from schools’ own surveillance cameras.

In Australia, cybersecurity startup Saasyan Assure developed an AI method that could help teachers track when students were watching “Momo Challenge” videos. Greg Margossian, head of Saasyan Assure, said that his company “just made sure to ensure that the ‘Momo’ keyword was in all the client’s databases, without them even thinking about it.” It is also reported that the company “offers a subscription software that can be added to all devices at school to create a historical footprint of each student’s computer use and ping teachers if any risks, from bullying to possible self-harm or violence, emerge.”
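The product's internals are not public, but the behavior Margossian describes — maintaining a list of risk terms such as “Momo,” building a historical footprint of activity, and pinging teachers on a match — can be sketched roughly as follows. The keyword list, data structures, and alert function are hypothetical.

```python
# Rough, hypothetical sketch of keyword-based risk flagging as described above;
# Saasyan's actual implementation is not public.
from datetime import datetime, timezone

RISK_KEYWORDS = {"momo", "momo challenge"}  # assumed example terms

activity_log: list[dict] = []  # per-student historical footprint


def alert_teacher(entry: dict) -> None:
    # In a real system this would notify staff; here we just print.
    print(f"ALERT: possible risk for {entry['student']}: {entry['terms']}")


def record_activity(student: str, url: str, search_terms: str) -> None:
    """Store the activity and alert a teacher if a risk keyword matches."""
    entry = {
        "student": student,
        "url": url,
        "terms": search_terms,
        "time": datetime.now(timezone.utc).isoformat(),
    }
    activity_log.append(entry)
    text = f"{url} {search_terms}".lower()
    if any(keyword in text for keyword in RISK_KEYWORDS):
        alert_teacher(entry)


record_activity("student-042", "https://youtube.com/watch", "momo challenge video")
```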

 



Cybersecurity

Deep Learning Used to Trick Hackers


A group of computer scientists at the University of Texas at Dallas has developed a new approach to defending against cyberattacks. Rather than blocking hackers, it entices them in.

The newly developed method is called DEEP-Dig (DEcEPtion DIGging), and it entices hackers into a decoy site so that the computer can learn their tactics. The computer is then trained with the information in order to recognize and stop future attacks.

The UT Dallas researchers presented their paper, “Improving Intrusion Detectors by Crook-Sourcing,” at the annual Computer Security Applications Conference in December in Puerto Rico. The group also presented “Automating Cyberdeception Evaluation with Deep Learning” at the Hawaii International Conference on System Sciences in January.

DEEP-Dig is part of an increasingly popular cybersecurity field called deception technology. As the name suggests, this field relies on traps set for hackers. The researchers hope the approach can be used effectively by defense organizations.

Dr. Kevin Hamlen is a Eugene McDermott Professor of computer science.

“There are criminals trying to attack our networks all the time, and normally we view that as a negative thing,” he said. “Instead of blocking them, maybe what we could be doing is viewing these attackers as a source of free labor. They’re providing us data about what malicious attacks look like. It’s a free source of highly prized data.”

This new approach addresses some of the major problems associated with using artificial intelligence (AI) for cybersecurity. One of those problems is a shortage of the data needed to train computers to detect hackers, a shortage caused by privacy concerns. According to Gbadebo Ayoade MS’14, PhD’19, who presented the findings at the conferences and is now a data scientist at Procter & Gamble Co., better data means a better ability to detect attacks.

“We’re using the data from hackers to train the machine to identify an attack,” said Ayoade. “We’re using deception to get better data.”

According to Hamlen, hackers most commonly begin with simpler tricks and progressively become more sophisticated. Most cyber defense programs in use today attempt to disrupt intruders immediately, so the intruders’ techniques are never learned. DEEP-Dig attempts to solve this by pushing the hackers into a decoy site full of disinformation so that their techniques can be observed. According to Dr. Latifur Khan, professor of computer science at UT Dallas, the decoy site appears legitimate to the hackers.

“Attackers will feel they’re successful,” Khan said.
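The published papers describe the actual pipeline; the sketch below is only a simplified illustration of the general idea — treating sessions captured in a decoy environment as labeled attack data and training a classifier on them alongside benign traffic. The session features and the model choice here are placeholders, not the UT Dallas team's.

```python
# Simplified illustration of training an intrusion detector on data harvested
# from a decoy environment. Features and model are placeholders, not the
# researchers' actual pipeline.
from sklearn.ensemble import RandomForestClassifier

# Each session summarized as [requests_per_minute, error_rate, distinct_paths].
benign_sessions = [[12, 0.01, 5], [8, 0.00, 3], [20, 0.02, 7], [15, 0.01, 6]]
decoy_sessions = [[150, 0.35, 60], [90, 0.50, 45], [200, 0.40, 80], [170, 0.45, 70]]

X = benign_sessions + decoy_sessions
y = [0] * len(benign_sessions) + [1] * len(decoy_sessions)  # 1 = observed in the decoy

detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a new, unseen session: probabilities of [benign, attack-like].
print(detector.predict_proba([[120, 0.30, 55]]))
```

The appeal of the decoy, as the researchers describe it, is that it keeps supplying labeled examples of attacker behavior without exposing real systems or private user data.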

Cyberattacks are a major concern for governmental agencies, businesses, nonprofits, and individuals. According to a report to the White House from the Council of Economic Advisers, the attacks cost the U.S. economy more than $57 billion in 2016.

DEEP-Dig could play a major role in evolving defense tactics as hacking techniques evolve. Intruders could disrupt the method if they realize they have entered a decoy site, but Hamlen is not overly concerned.

“So far, we’ve found this doesn’t work. When an attacker tries to play along, the defense system just learns how hackers try to hide their tracks,” Hamlen said. “It’s an all-win situation — for us, that is.”

Other researchers involved in the work include Frederico Araujo PhD’16, research scientist at IBM’s Thomas J. Watson Research Center; Khaled Al-Naami PhD’17; Yang Gao, a UT Dallas computer science graduate student; and Dr. Ahmad Mustafa of Jordan University of Science and Technology.

The research was partly supported by the Office of Naval Research, the National Security Agency, the National Science Foundation, and the Air Force Office of Scientific Research.

 


Cybersecurity

Startups Creating AI Tools To Detect Email Harassment


Since the Me Too movement came to prominence in late 2017, more and more attention has been paid to incidents of sexual harassment, including workplace harassment and harassment through email or instant messaging.

As reported by The Guardian, AI researchers and engineers have been creating tools, dubbed MeTooBots, to detect harassment in text communications. MeTooBots are being implemented by companies around the world in order to flag potentially harmful and harassing communications. One example is a bot created by the company Nex AI, which is currently being used by around 50 different companies. The bot utilizes an algorithm that examines company documents, chat, and emails and compares them to its training data of bullying or harassing messages. Messages deemed potentially harassing or harmful can then be sent to an HR manager for review, although Nex AI has not revealed the specific terms the bot looks for across the communications it analyzes.
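Since Nex AI has not disclosed what its bot actually looks for, the following is only a generic sketch of the underlying idea: a text classifier trained on examples of harassing and non-harassing messages that routes high-scoring messages to HR for review. The training examples and threshold are invented.

```python
# Generic, hypothetical sketch of flagging messages with a text classifier.
# Nex AI's actual terms, training data, and model are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: 1 = harassing/bullying, 0 = benign.
messages = [
    "you are useless and everyone here knows it",
    "send me a photo or you'll regret it",
    "can you review the Q3 report before Friday?",
    "great job on the client presentation today",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(messages, labels)

REVIEW_THRESHOLD = 0.7  # assumed cutoff for routing a message to HR


def needs_hr_review(message: str) -> bool:
    probability = classifier.predict_proba([message])[0][1]
    return probability >= REVIEW_THRESHOLD


print(needs_hr_review("nobody wants you on this team"))
```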

Other startups have also created AI-powered harassment detection tools. The AI startup Spot offers a chatbot that enables employees to anonymously report allegations of sexual harassment. The bot asks questions and gives advice in order to collect more details and further an investigation into the incident. Spot aims to help HR teams deal with harassment issues in a sensitive manner while ensuring anonymity is preserved.

According to The Guardian, Prof. Brian Subirana, an AI professor at MIT and Harvard, explained that attempts to use AI to detect harassment have their limitations. Harassment can be very subtle and hard to pick up, frequently manifesting only as a pattern that reveals itself across weeks of data. Bots also can’t, as of yet, go beyond detecting certain trigger words to analyze the broader interpersonal or cultural dynamics that could be at play. Despite the complexities of detecting harassment, Subirana does believe that bots could play a role in combating online harassment. He could see the bots being used to train people to detect harassment when they see it, creating a database of potentially problematic messages. Subirana also suggested there could be a placebo effect that makes people less likely to harass their colleagues if they suspect their messages are being scrutinized, even when they are not.

While Subirana believes that bots have potential uses in combating harassment, he also argued that the confidentiality of data and privacy are major concerns, stating that such technology could create an atmosphere of distrust and suspicion if misused. Sam Smethers, the chief executive of the women’s rights NGO the Fawcett Society, also expressed concern about how the bots could be misused. Smethers stated:

“We would want to look carefully at how the technology is being developed, who is behind it, and whether the approach taken is informed by a workplace culture that is seeking to prevent harassment and promote equality, or whether it is in fact just another way to control their employees.”

Methods of using bots to detect harassment while still protecting anonymity and privacy will have to be worked out between bot developers, companies, and regulators. One possible way to use the predictive power of bots and AI while safeguarding privacy is to keep communications anonymous. For instance, the bot could generate reports that record only the presence of potentially harmful language and counts of how often it appears. HR could then get an idea of whether uses of toxic language are dropping following awareness seminars, or conversely determine whether they should be on the lookout for increased harassment.
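One way to picture that anonymized-reporting idea is an aggregation step that discards identities and message text entirely and keeps only counts of flagged messages per period. This is a hypothetical sketch, not any vendor's actual implementation.

```python
# Hypothetical sketch of privacy-preserving aggregate reporting: only counts
# of flagged messages per month are retained, never identities or content.
from collections import Counter


def monthly_flag_counts(flag_events: list[dict]) -> Counter:
    """Each flag event carries only a coarse timestamp, e.g. {"month": "2020-01"}."""
    return Counter(event["month"] for event in flag_events)


events = [{"month": "2020-01"}, {"month": "2020-01"}, {"month": "2020-02"}]
print(monthly_flag_counts(events))  # Counter({'2020-01': 2, '2020-02': 1})
```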

Despite the disagreement over appropriate uses of machine learning algorithms and bots in detecting harassment, both sides seem to agree that the ultimate decision to intervene in cases of harassment should be made by a human, and that bots should only ever alert people to matched patterns rather than definitively labeling something an instance of harassment.


Cybersecurity

AI Security Monitoring & Job Recruitment Companies Raise Funds


VentureBeat reports on two large new funding rounds for startups developing artificial intelligence. Umbo Computer Vision (UCV) provides autonomous video security systems to businesses, while Xor is developing an AI chatbot platform for recruiters and job seekers. Both startups are located in San Francisco; UCV, a joint US–Taiwanese venture, also has bases in Taiwan and the UK. UCV raised $8 million for its AI-powered video security, while Xor raised $8.4 million for its project.

Xor’s capital infusion came after a year in which the startup tripled its sales in the US, “reaching $2 million in annual recurring revenue and closing deals with over 100 customers in 15 countries, including ExxonMobil, Ikea, Baxter Personnel, Heineken, IBS, Aldi, Hoff, McDonald’s, and Mars.” As the company co-founder and CEO Aida Fazylova explains, she “started the company to let recruiters focus on the human touch — building relationships, interviewing candidates, and attracting the best talent to their companies. Meanwhile, AI takes care of repetitive tasks and provides 24/7 personalized service to every candidate. We are proud to get support from SignalFire and other amazing investors who help us drive our mission to make the recruitment experience better and more transparent for everyone.”

Xor’s chatbot “automates tedious job recruitment tasks, like scheduling interviews; sorting applications; and responding to questions via email, text, and messaging apps like Facebook Messenger and Skype. The eponymous Xor — which is hosted on Microsoft’s Azure — draws on over 500 sources for suitable candidates and screens those candidates autonomously, leveraging 103 different languages and algorithms trained on 17 different HR and recruitment data sets.”

According to Grand View Research, the chatbot market is expected to reach $1.23 billion by 2025, while Gartner predicts that chatbots will power 85% of all customer service interactions by the year 2020.

For its part, Umbo develops “software, hardware, and AI smarts that can detect and identify human behaviors related to security, such as intrusion, tailgating (when an unauthorized individual follows someone into private premises), and wall-scaling.”

The company says it has developed its AI systems entirely in-house, and its system incorporates three components: “AiCameras,” which “are built in-house and feature built-in AI chips, connecting directly to the cloud to bypass servers and video recording intermediates, such as NVRs or DVRs”; “Light,” which “is AI-powered software for detecting and issuing alerts on human-related security actions”; and “TruePlatform, a centralized platform where businesses can monitor and manage all their cameras, users, and security events.” As Shawn Guan, Umbo’s cofounder and CEO, points out, the company launched Umbo Light, “which implemented feedback that we gathered from our customers about what their primary wants from video security systems were. This allowed us to design and deliver a system based on the needs of those who use it most.”

The global video surveillance market, which now relies heavily on AI, was pegged at $28 billion in 2017 and is expected to grow to more than $87 billion by 2025.

 
