Cybersecurity

Deep Learning Used to Trick Hackers

A group of computer scientists at the University of Texas at Dallas has developed a new approach to defending against cyberattacks. Rather than blocking hackers, the method entices them in.

The newly developed method is called DEEP-Dig (DEcEPtion DIGging), and it entices hackers into a decoy site so that the computer can learn their tactics. The computer is then trained with the information in order to recognize and stop future attacks.
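
To make the idea concrete, here is a minimal sketch of the underlying pattern, not the UT Dallas implementation: traffic captured on a decoy host is treated as labeled attack data and combined with benign traffic to train an intrusion detector. The feature vectors and the random forest model are illustrative assumptions.

```python
# Hedged sketch: train a detector on attack traces gathered from a decoy host.
# Features and model choice are placeholders, not the DEEP-Dig implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Placeholder per-request features (e.g., request length, entropy, header counts).
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
decoy_attacks = rng.normal(loc=1.5, scale=1.2, size=(500, 8))  # traffic observed on the decoy

X = np.vstack([benign, decoy_attacks])
y = np.array([0] * len(benign) + [1] * len(decoy_attacks))  # 1 = attack

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

detector = RandomForestClassifier(n_estimators=100, random_state=0)
detector.fit(X_train, y_train)

print(classification_report(y_test, detector.predict(X_test)))
```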

The UT Dallas researchers presented their paper, “Improving Intrusion Detectors by Crook-Sourcing,” at the Annual Computer Security Applications Conference in December in Puerto Rico. The group also presented “Automating Cyberdeception Evaluation with Deep Learning” at the Hawaii International Conference on System Sciences in January.

DEEP-Dig is part of an increasingly popular cybersecurity field called deception technology. As the name suggests, the field relies on setting traps for hackers. The researchers hope the approach will prove especially useful for defense organizations.

Dr. Kevin Hamlen, Eugene McDermott Professor of computer science at UT Dallas, is one of the researchers behind the method.

“There are criminals trying to attack our networks all the time, and normally we view that as a negative thing,” he said. “Instead of blocking them, maybe what we could be doing is viewing these attackers as a source of free labor. They’re providing us data about what malicious attacks look like. It’s a free source of highly prized data.”

This new approach is being used to address some of the major problems associated with using artificial intelligence (AI) for cybersecurity. One of those problems is a shortage of the data needed to train computers to detect hackers, largely due to privacy concerns. According to Gbadebo Ayoade MS’14, PhD’19, who presented the findings at the conferences and is now a data scientist at Procter & Gamble Co., better data means a better ability to detect attacks.

“We’re using the data from hackers to train the machine to identify an attack,” said Ayoade. “We’re using deception to get better data.”

According to Hamlen, hackers typically begin with simpler tricks and progressively become more sophisticated. Most cyber defense programs in use today attempt to disrupt intruders immediately, so the intruders’ techniques are never learned. DEEP-Dig addresses this by steering hackers into a decoy site full of disinformation so that their techniques can be observed. According to Dr. Latifur Khan, professor of computer science at UT Dallas, the decoy site appears legitimate to the hackers.

“Attackers will feel they’re successful,” Khan said.

Cyberattacks are a major concern for governmental agencies, businesses, nonprofits, and individuals. According to a report to the White House from the Council of Economic Advisers, the attacks cost the U.S. economy more than $57 billion in 2016.

DEEP-Dig could help defense tactics evolve at the same pace as hacking techniques. Intruders could disrupt the method if they realize they have entered a decoy site, but Hamlen is not overly concerned.

“So far, we’ve found this doesn’t work. When an attacker tries to play along, the defense system just learns how hackers try to hide their tracks,” Hamlen said. “It’s an all-win situation — for us, that is.”

Other researchers involved in the work include Frederico Araujo PhD’16, research scientist at IBM’s Thomas J. Watson Research Center; Khaled Al-Naami PhD’17; Yang Gao, a UT Dallas computer science graduate student; and Dr. Ahmad Mustafa of Jordan University of Science and Technology.

The research was partly supported by the Office of Naval Research, the National Security Agency, the National Science Foundation, and the Air Force Office of Scientific Research.

 


Artificial General Intelligence

Vahid Behzadan, Director of the Secure and Assured Intelligent Learning (SAIL) Lab – Interview Series

Vahid is an Assistant Professor of Computer Science and Data Science at the University of New Haven. He is also the director of the Secure and Assured Intelligent Learning (SAIL) Lab.

His research interests include safety and security of intelligent systems, psychological modeling of AI safety problems, security of complex adaptive systems, game theory, multi-agent systems, and cyber-security.

You have an extensive background in cybersecurity and keeping AI safe. Can you share your journey in how you became attracted to both fields?

My research trajectory has been fueled by two core interests of mine: finding out how things break, and learning about the mechanics of the human mind. I have been actively involved in cybersecurity since my early teen years, and consequently built my early research agenda around the classical problems of this domain. A few years into my graduate studies, I stumbled upon a rare opportunity to change my area of research. At that time, I had just come across the early works of Szegedy and Goodfellow on adversarial example attacks, and found the idea of attacking machine learning very intriguing. As I looked deeper into this problem, I came to learn about the more general field of AI safety and security, and found it to encompass many of my core interests, such as cybersecurity, cognitive sciences, economics, and philosophy. I also came to believe that research in this area is not only fascinating, but also vital for ensuring the long-term benefits and safety of the AI revolution.

 

You’re the director of the Secure and Assured Intelligent Learning (SAIL) Lab which works towards laying concrete foundations for the safety and security of intelligent machines. Could you go into some details regarding work undertaken by SAIL?

At SAIL, my students and I work on problems that lie at the intersection of security, AI, and complex systems. The primary focus of our research is on investigating the safety and security of intelligent systems, from both the theoretical and the applied perspectives. On the theoretical side, we are currently investigating the value-alignment problem in multi-agent settings and are developing mathematical tools to evaluate and optimize the objectives of AI agents with regard to stability and robust alignment. On the practical side, some of our projects explore the security vulnerabilities of cutting-edge AI technologies, such as autonomous vehicles and algorithmic trading, and aim to develop techniques for evaluating and improving the resilience of such technologies to adversarial attacks.

We also work on the applications of machine learning in cybersecurity, such as automated penetration testing, early detection of intrusion attempts, and automated threat intelligence collection and analysis from open sources of data such as social media.

 

You recently led an effort to propose the modeling of AI safety problems as psychopathological disorders. Could you explain what this is?

This project addresses the rapidly growing complexity of AI agents and systems: it is already very difficult to diagnose, predict, and control unsafe behaviors of reinforcement learning agents in non-trivial settings by simply looking at their low-level configurations. In this work, we emphasize the need for higher-level abstractions in investigating such problems. Inspired by the scientific approaches to behavioral problems in humans, we propose psychopathology as a useful high-level abstraction for modeling and analyzing emergent deleterious behaviors in AI and AGI. As a proof of concept, we study the AI safety problem of reward hacking in an RL agent learning to play the classic game of Snake. We show that if we add a “drug” seed to the environment, the agent learns a sub-optimal behavior that can be described via neuroscientific models of addiction. This work also proposes control methodologies based on the treatment approaches used in psychiatry. For instance, we propose the use of artificially-generated reward signals as analogues of medication therapy for modifying the deleterious behavior of agents.
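
The reward-hacking effect described here can be illustrated with a toy sketch, assuming a simple chain environment rather than the Snake game used in the work: a “drug” cell pays a small reward on every visit, while the true goal pays a one-time larger reward, and a tabular Q-learning agent learns to loop on the drug cell instead of finishing. All states, rewards, and hyperparameters below are illustrative assumptions.

```python
# Toy illustration of reward hacking, not the authors' Snake experiment:
# with a long enough horizon, repeatedly harvesting the "drug" reward
# outvalues the one-time goal reward, so the learned policy never finishes.
import numpy as np

N_STATES, DRUG, GOAL = 6, 2, 5          # states 0..5; reaching the goal ends the episode
ACTIONS = [-1, +1]                      # move left / right
EPISODE_LEN, GAMMA, ALPHA, EPS = 50, 0.97, 0.1, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))

def step(state, action):
    nxt = int(np.clip(state + ACTIONS[action], 0, N_STATES - 1))
    if nxt == GOAL:
        return nxt, 10.0, True          # one-time goal reward, episode ends
    if nxt == DRUG:
        return nxt, 1.0, False          # "drug" cell pays out on every visit
    return nxt, 0.0, False

for _ in range(3000):
    s = 0
    for _ in range(EPISODE_LEN):
        a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(Q[s].argmax())
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * Q[s2].max()
        Q[s, a] += ALPHA * (target - Q[s, a])
        s = s2
        if done:
            break

# The greedy policy tends to oscillate around the drug cell rather than reach the goal.
print("greedy action per state:", ["L" if a == 0 else "R" for a in Q.argmax(axis=1)])
```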

 

Do you have any concerns with AI safety when it comes to autonomous vehicles?

Autonomous vehicles are becoming prominent examples of deploying AI in cyber-physical systems. Considering the fundamental susceptibility of current machine learning technologies to mistakes and adversarial attacks, I am deeply concerned about the safety and security of even semi-autonomous vehicles. Also, the field of autonomous driving suffers from a serious lack of safety standards and evaluation protocols. However, I remain hopeful. Similar to natural intelligence, AI will also be prone to making mistakes. Yet, the objective of self-driving cars can still be satisfied if the rates and impact of such mistakes are made lower than those of human drivers. We are witnessing growing efforts to address these issues in industry and academia, as well as in government.

 

Hacking street signs with stickers or using other means can confuse the computer vision module of an autonomous vehicle. How big of an issue do you believe this is?

These stickers, and Adversarial Examples in general, give rise to fundamental challenges in the robustness of machine learning models. To quote George E. P. Box, “all models are wrong, but some are useful”. Adversarial examples exploit this “wrong”ness of models, which is due to their abstractive nature, as well as the limitations of sampled data upon which they are trained. Recent efforts in the domain of adversarial machine learning have resulted in tremendous strides towards increasing the resilience of deep learning models to such attacks. From a security point of view, there will always be a way to fool machine learning models. However, the practical objective of securing machine learning models is to increase the cost of implementing such attacks to the point of economic infeasibility.
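
As a concrete illustration of how such adversarial examples are typically crafted, here is a minimal Fast Gradient Sign Method (FGSM) sketch in PyTorch. The stand-in classifier, input, label, and epsilon are assumptions for illustration only, not a model used in any of the work discussed here.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
model.eval()

x = torch.rand(1, 1, 28, 28)           # stand-in image with pixels in [0, 1]
y = torch.tensor([3])                  # its assumed true label
epsilon = 0.1                          # perturbation budget

x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# Step along the sign of the gradient, then clamp back to the valid pixel range.
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```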

 

Your focus is on the safety and security features of both deep learning and deep reinforcement learning. Why is this so important?

Reinforcement Learning (RL) is the prominent method of applying machine learning to control problems, which by definition involve the manipulation of their environment. Therefore, I believe systems based on RL carry significantly higher risks of causing major damage in the real world compared to other machine learning methods such as classification. This problem is further exacerbated by the integration of deep learning into RL, which enables the adoption of RL in highly complex settings. Also, it is my opinion that the RL framework is closely related to the underlying mechanisms of cognition in human intelligence, and studying its safety and vulnerabilities can lead to better insights into the limits of decision-making in our minds.

 

Do you believe that we are close to achieving Artificial General Intelligence (AGI)?

This is a notoriously hard question to answer. I believe that we currently have the building blocks of some architectures that can facilitate the emergence of AGI. However, it may take a few more years or decades to improve upon these architectures and enhance the cost-efficiency of training and maintaining them. Over the coming years, our agents are going to grow more intelligent at an accelerating rate. I don’t think the emergence of AGI will be announced in the form of a [scientifically valid] headline, but as the result of gradual progress. Also, I think we still do not have a widely accepted methodology to test and detect the existence of an AGI, and this may delay our realization of the first instances of AGI.

 

How do we maintain safety in an AGI system that is capable of thinking for itself and will most likely be exponentially more intelligent than humans?

I believe that the grand unified theory of intelligent behavior is economics: the study of how agents act and interact to achieve what they want. The decisions and actions of humans are determined by their objectives, their information, and the available resources. Societies and collaborative efforts emerge from their benefits to the individual members of such groups. Another example is the criminal code, which deters certain decisions by attaching a high cost to actions that may harm society. In the same way, I believe that controlling the incentives and resources can enable the emergence of a state of equilibrium between humans and instances of AGI. Currently, the AI safety community investigates this thesis under the umbrella of value-alignment problems.

 

One of the areas you closely follow is counterterrorism. Do you have concerns with terrorists taking over AI or AGI systems?

There are numerous concerns about the misuse of AI technologies. In the case of terrorist operations, the major concern is the ease with which terrorists can develop and carry out autonomous attacks. A growing number of my colleagues are actively warning against the risks of developing autonomous weapons (see https://autonomousweapons.org/). One of the main problems with AI-enabled weaponry is the difficulty of controlling the underlying technology: AI is at the forefront of open-source research, and anyone with access to the internet and consumer-grade hardware can develop harmful AI systems. I suspect that the emergence of autonomous weapons is inevitable, and believe that there will soon be a need for new technological solutions to counter such weapons. This can result in a cat-and-mouse cycle that fuels the evolution of AI-enabled weapons, which may give rise to serious existential risks in the long term.

 

What can we do to keep AI systems safe from these adversarial agents?

The first and foremost step is education: all AI engineers and practitioners need to learn about the vulnerabilities of AI technologies and consider the relevant risks in the design and implementation of their systems. As for more technical recommendations, there are various proposals and solution concepts that can be employed. For example, training machine learning agents in adversarial settings can improve their resilience and robustness against evasion and policy manipulation attacks (e.g., see my paper titled “Whatever Does Not Kill Deep Reinforcement Learning, Makes it Stronger”). Another solution is to directly account for the risk of adversarial attacks in the architecture of the agent (e.g., Bayesian approaches to risk modeling). There is, however, a major gap in this area: the need for universal metrics and methodologies for evaluating the robustness of AI agents against adversarial attacks. Current solutions are mostly ad hoc and fail to provide general measures of resilience against all types of attacks.
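
As a rough illustration of the adversarial-training idea mentioned above (shown here in a supervised setting rather than the RL setting of the cited paper), the sketch below crafts an FGSM perturbation of each batch and then updates the model on the perturbed inputs. The data, model, and hyperparameters are placeholders.

```python
# Hedged sketch of adversarial training: attack each batch, then train on the attacked batch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epsilon = 0.1

for _ in range(100):                              # toy training loop on random stand-in data
    x = torch.rand(32, 1, 28, 28)
    y = torch.randint(0, 10, (32,))

    # Inner step: build adversarial inputs with FGSM.
    x.requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    # Outer step: update the model on the adversarial batch.
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
```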

 

Is there anything else that you would like to share about any of these topics?

In 2014, Sculley et al. published a paper at the NeurIPS conference with a very enlightening title: “Machine Learning: The High-Interest Credit Card of Technical Debt”. Even with all the advancements of the field in the past few years, this statement has yet to lose its validity. The current state of AI and machine learning is nothing short of awe-inspiring, but we have yet to fill a significant number of major gaps in both the foundational and the engineering dimensions of AI. This fact, in my opinion, is the most important takeaway of our conversation. I of course do not mean to discourage the commercial adoption of AI technologies, but only wish to enable the engineering community to account for the risks and limits of current AI technologies in their decisions.

I really enjoyed learning about the safety and security challenges of different types of AI systems. This is truly something that individuals, corporations, and governments need to become aware of. Readers who wish to learn more should visit the Secure and Assured Intelligent Learning (SAIL) Lab.


Cybersecurity

Awake Security Plans to Expand After Raising $36 Million

The Santa Clara, California-based startup Awake Security plans to expand after raising $36 million in Series C funding. The company’s cybersecurity platform analyzes network traffic by using artificial intelligence (AI) and human expertise in order to identify internal and external threats. 

The company was founded in 2014 and has since secured around $80 million in total funding, including the Series C round. New investors include Evolution Equity Partners, Energize Ventures, and Liberty Global Ventures, while existing investors include Bain Capital Ventures and Greylock Partners.

“We’re partnering with Awake because we believe its platform can have a big impact in the industrial sector,” Juan Muldoon, partner at Energize Ventures, said. “The challenges with protecting critical infrastructure are changing rapidly, and as the attack surface for digital threats expands, so have the blind spots for many organizations.”

An internally led undisclosed Series B round brought in $12 million in 2018. 

“Awake has assembled the best minds in networking, machine learning, data science, cybersecurity, and other disciplines to create something entirely new that fills a massive void in the security market,” said Rahul Kashyap, CEO of Awake Security. “By partnering with Evolution Equity with its deep U.S. and European network and cybersecurity expertise, and strategic investors Energize Ventures and Liberty Global, we’re building on that momentum to bring the Awake platform to even more organizations around the globe.”

What Awake Security Can Do

Awake Security’s platform can identify every device on a network and determine whether each is a phone, tablet, or something else. This gives companies visibility into the devices, users, and applications on their networks. The platform relies on machine learning to identify anomalous behaviors.

The cybersecurity platform combines unsupervised, supervised, and federated machine learning, which uses decentralized data, in order to identify security threats. This is more effective than platforms that rely strictly on unsupervised learning, which can produce false positives.
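
A simplified sketch of that kind of combination, assuming placeholder features and models rather than Awake’s actual platform: an unsupervised anomaly detector flags unusual network flows, and a supervised classifier trained on previously triaged alerts filters out benign anomalies before anything reaches the security team.

```python
# Hedged sketch: unsupervised anomaly detection plus a supervised triage filter
# to reduce false positives. Features, models, and labels are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest, GradientBoostingClassifier

rng = np.random.default_rng(0)

flows = rng.normal(size=(2000, 10))                 # placeholder per-flow features
detector = IsolationForest(random_state=0).fit(flows)
flagged = flows[detector.predict(flows) == -1]      # -1 marks anomalous flows

# Historical alerts that analysts already labeled (1 = real threat, 0 = benign anomaly).
past_alerts = rng.normal(size=(400, 10))
past_labels = rng.integers(0, 2, size=400)
triage = GradientBoostingClassifier(random_state=0).fit(past_alerts, past_labels)

# Only anomalies the supervised model also considers threats are raised to the team.
raised = flagged[triage.predict(flagged) == 1]
print(f"{len(flagged)} anomalies flagged, {len(raised)} raised after triage")
```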

Awake Security’s system allows security threats to be identified without over-alerting security teams. Oftentimes, these teams receive a large number of red flags triggered by safe behavior, such as individuals working from somewhere other than their usual locations.

The company has also unveiled Ava, which it calls “the world’s first privacy-aware security expert system.” According to the company’s website, “Ava combines federated machine learning (ML) with expertise from Awake threat researchers and security analysts to identify multi-stage attacks and enable automatic threat validation and triage.”

COVID-19

The ongoing COVID-19 pandemic is causing an increase in cybersecurity threats around the world. Companies are less able to deal with cybersecurity issues than before because employees are not in offices.

“COVID-19 is a prominent use case,” according to Evolution Equity partner Karthik Subramanian. “If we can identify attacks and compromises in this environment, hopefully we can do something about that. What has happened is the industry, as a whole, is moving toward smarter detection and response in a more timely manner.”

Subramanian led Cisco’s cybersecurity acquisition and investment team before joining Evolution Equity. 

“We invested in Awake because we recognize its unique ability to help organizations fight modern threats. The traction and the third-party recognition Awake has received combined with our resources in and knowledge of the U.S. and European markets only bolsters our conviction,” continued Subramanian.

Increased Cybersecurity Spending and Expansion

Outside of the issues brought on by the current pandemic, spending on cybersecurity is expected to increase modestly by 2023.

Awake Security’s annual recurring revenue has increased by about 700 percent over the past year, and the company has doubled its number of employees.

Awake Security plans to expand after the Series C funding, with Europe as the target. The region is currently experiencing a skills gap as well as an increase in automation, which makes cybersecurity even more important during this time.

 


Cybersecurity

Dr. Don Widener, Technical Director of BAE Systems’ Advanced Analytics Lab – Interview Series

Don Widener is the Technical Director of BAE Systems’ Advanced Analytics Lab and Intelligence, Surveillance & Reconnaissance (ISR) Analysis Portfolio.

BAE Systems is a global defense, aerospace and security company employing around 83,000 people worldwide. Their wide-ranging products and services cover air, land and naval forces, as well as advanced electronics, security, information technology, and support services.

What was it that initially attracted you personally to AI and robotics?

I’ve always been interested in augmenting the ability of intelligence analysts to be more effective in their mission, whether that is through trade-craft development or technology. With an intelligence analysis background myself, I’ve focused my career on closing the gap between intelligence data collection and decision making.

 

In August 2019, BAE Systems announced a partnership with UiPath to launch the Robotic Operations Center, which will bring automation and machine learning capabilities to U.S. defense and intelligence communities. Could you describe this partnership?

Democratizing AI for our 2,000+ intelligence analysts is a prime driver for BAE Systems Intelligence & Security sector’s Advanced Analytics Lab. By using Robotic Process Automation (RPA) tools like UiPath we could rapidly augment our analysts with tailored training courses and communities of practice (like the Robotic Operations Center), driving gains in efficiency and effectiveness. Analysts with no programming foundation can build automation models or “bots” to address repetitive tasks.

 

How will the bots from the Robotic Operations Center be used to combat cybercrime?

There is a major need for applying AI to external threat data collection for Cyber Threat analysis. At RSA 2020, we partnered with Dell to showcase their AI Ready Bundle for Machine Learning, which includes NVIDIA GPUs, libraries and frameworks, and management software in a complete solution stack. We showcased human-machine teaming by walking conference goers through an object detection model creation used to filter publicly available data to identify physical threat hot spots, which may trigger cybercrime.
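
A rough sketch of that kind of filtering pipeline, assuming a pretrained COCO object detector from torchvision rather than BAE Systems’ actual model: images from publicly available sources are run through the detector, and only those containing high-confidence objects of interest are flagged for an analyst to review.

```python
# Hedged sketch of human-machine teaming for image filtering.
# The COCO model, class IDs, and threshold are illustrative assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
CLASSES_OF_INTEREST = {1, 3, 8}        # e.g., COCO ids for person, car, truck
SCORE_THRESHOLD = 0.8

def flag_for_review(images):
    """Return indices of images containing high-confidence objects of interest."""
    with torch.no_grad():
        outputs = model(images)        # list of dicts with 'boxes', 'labels', 'scores'
    flagged = []
    for i, out in enumerate(outputs):
        keep = out["scores"] > SCORE_THRESHOLD
        if any(label.item() in CLASSES_OF_INTEREST for label in out["labels"][keep]):
            flagged.append(i)
    return flagged

# Stand-in batch of images (normally decoded frames or photos scaled to [0, 1]).
batch = [torch.rand(3, 480, 640), torch.rand(3, 480, 640)]
print("images flagged for analyst review:", flag_for_review(batch))
```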

 

Vast seas of big data will be collected to train the neural networks used by the bots. What are some of the datasets that will be collected?

BAE Systems was recently awarded the Army’s Open Source Intelligence (OSINT) contract responsible for integrating big data capabilities into our secure cloud hosting environment.

 

Could you describe some of the current deep learning methodologies being worked on at BAE Systems?

Some of the areas where we are applying deep learning include motion imagery analysis, humanitarian disaster relief, and COVID-19.

 

Do you believe that object detection and classification are still an issue when it comes to objects that are only partially visible or obscured by other objects?

Computer vision models are less effective when objects are partially obscured, but for national mission initiatives like Foundational Military Intelligence, even high false positive rates could still support decision advantage.

 

What are some of the other challenges facing computer vision?

Data labeling is a challenge. We’ve partnered with several data labeling companies to label unclassified data, but for classified data we are using our intelligence analyst workforce to support these CV training initiatives, and this workforce is a finite resource.

Thank you for this interview. Anyone who wishes to learn more may visit BAE Systems.
