Acronis SCS and Leading Academics Partner to Develop AI-based Risk Scoring Model - Unite.AI

U.S. cyber protection company Acronis SCS has partnered with leading academics to improve software through the use of artificial intelligence (AI). The collaboration developed an AI-based risk scoring model capable of quantitatively assessing software code vulnerability. 

The new model demonstrated a 41% improvement at detecting common vulnerabilities and exposures (CVEs) during its first stage of analysis. Subsequent tests yielded similarly strong results, and Acronis SCS plans to share the model upon its completion. 

Software Vendors and Public Sector

One of the most notable aspects of this technology is that it can also be used by other software vendors and public sector organizations. It offers these organizations an affordable way to improve software supply chain validation without stifling innovation or small business opportunity. 

Acronis SCS’ AI-based model relies on a deep learning neural network that scans both open-source and proprietary source code. It provides impartial quantitative risk scores that IT administrators can use to make informed decisions about deploying new software packages and updating existing ones. 

The company uses a language model to embed code. A form of deep learning, the language model combines an embedding layer with a recurrent neural network (RNN). Because vulnerable functions are rare in the data, up-sampling techniques and classification algorithms such as boosting, random forests, and neural networks are used to train the model. 
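To make the architecture concrete, here is a minimal sketch of an embedding layer feeding a plain RNN that scores a tokenized function. All dimensions, weights, and the `score_function` name are illustrative assumptions; the article does not disclose Acronis SCS' actual model or its parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the article does not specify the model's dimensions.
VOCAB_SIZE = 50      # number of distinct code tokens
EMBED_DIM = 8        # embedding width
HIDDEN_DIM = 16      # RNN hidden-state width

# Randomly initialised parameters stand in for trained weights.
E = rng.normal(size=(VOCAB_SIZE, EMBED_DIM))      # embedding layer
W_xh = rng.normal(size=(EMBED_DIM, HIDDEN_DIM))   # input -> hidden
W_hh = rng.normal(size=(HIDDEN_DIM, HIDDEN_DIM))  # hidden -> hidden
w_out = rng.normal(size=HIDDEN_DIM)               # hidden -> logit

def score_function(token_ids):
    """Return P(y=1 | x): the probability that a function is vulnerable."""
    h = np.zeros(HIDDEN_DIM)
    for t in token_ids:                      # simple (Elman) RNN over the token sequence
        h = np.tanh(E[t] @ W_xh + h @ W_hh)
    logit = h @ w_out
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid -> probability

# A "function" here is just a sequence of integer token ids.
p = score_function([3, 17, 42, 9])
print(f"P(vulnerable) = {p:.3f}")
```

In practice the embedding and RNN weights would be learned from labeled (function, tag) pairs rather than drawn at random, but the forward pass above shows how a token sequence is reduced to a single vulnerability probability.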

Dr. Joe Barr is Acronis SCS’ Senior Director of Research. 

“We use language model to embed code. Language model is a form of deep learning which combines an embedding layer with recurrent neural network (RNN),” Dr. Barr told Unite.AI. 

“The input consists of function pairs (function, tag) and the output is a probability P(y=1 | x) that a function is vulnerable to hack (buggy). Because positive tags are rare, we use various up-sampling techniques and classification algorithms (like boosting, random forests and neural networks). We measure “goodness” by ROC/AUC and a percentile lift (number of “bads” in top k percentile, k=1,2,3,4,5).”
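The evaluation Dr. Barr describes can be sketched in a few lines: ROC/AUC computed via the rank (Mann-Whitney) formulation, the percentile lift (count of "bads" among the top k percent of scores), and naive random up-sampling of the rare positive class. The toy data and function names are assumptions for illustration, not Acronis SCS' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def roc_auc(y_true, y_score):
    """ROC/AUC via the rank (Mann-Whitney U) formulation."""
    y_true = np.asarray(y_true)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def percentile_lift(y_true, y_score, k):
    """Number of 'bads' (positives) among the top k percent of scores."""
    y_true = np.asarray(y_true)
    n_top = max(1, int(len(y_score) * k / 100))
    top = np.argsort(y_score)[::-1][:n_top]   # highest-scored items
    return int(y_true[top].sum())

# Toy data: positive tags (vulnerable functions) are rare, as in the article.
y = np.zeros(1000, dtype=int)
y[:20] = 1
scores = rng.random(1000) + 0.5 * y    # positives tend to score higher

print("AUC:", round(roc_auc(y, scores), 3))
for k in (1, 2, 3, 4, 5):
    print(f"positives in top {k}%:", percentile_lift(y, scores, k))

# Naive random up-sampling of the rare positive class before training:
pos_idx = np.flatnonzero(y == 1)
resampled = rng.choice(pos_idx, size=(y == 0).sum(), replace=True)
balanced_idx = np.concatenate([np.flatnonzero(y == 0), resampled])
```

A higher concentration of true positives in the top percentiles means an analyst reviewing only the riskiest few percent of functions still catches most vulnerabilities, which is the practical point of the lift measure.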

Efficient Validation Process

Another great opportunity for this technology is its ability to make the validation process far more efficient. 

“Supply chain validation, placed inside a validation process, will help identify buggy/vulnerable code and will make the validation process more efficient by several orders of magnitude,” he continued. 

As with all AI and software, it is crucial to understand and address any potential risks. When asked whether any risks are unique to open source software (OSS), Dr. Barr said there are both generic and specific risks. 

“There are generic risks and specific risks,” he said. “The generic risk includes “innocent” bugs in the code which may be exploited by a nefarious actor. Specific risks relate to an adversarial actor (like state-sponsored agency) who deliberately introduces bugs into open source to be exploited at some point.”

The initial results of the analysis were published in an IEEE paper titled “Combinatorial Code Classification & Vulnerability.”

 

Alex McFarland is a tech writer who covers the latest developments in artificial intelligence. He has worked with AI startups and publications across the globe.