
Interviews

Dr. Don Widener, Technical Director of BAE Systems’ Advanced Analytics Lab – Interview Series



Don Widener is the Technical Director of BAE Systems’ Advanced Analytics Lab and Intelligence, Surveillance & Reconnaissance (ISR) Analysis Portfolio.

BAE Systems is a global defense, aerospace and security company employing around 83,000 people worldwide. Their wide-ranging products and services cover air, land and naval forces, as well as advanced electronics, security, information technology, and support services.

What was it that initially attracted you personally to AI and robotics?

I’ve always been interested in augmenting the ability of intelligence analysts to be more effective in their mission, whether that is through trade-craft development or technology. With an intelligence analysis background myself, I’ve focused my career on closing the gap between intelligence data collection and decision making.

 

In August 2019, BAE Systems announced a partnership with UiPath to launch the Robotic Operations Center, which will bring automation and machine learning capabilities to U.S. defense and intelligence communities. Could you describe this partnership?

Democratizing AI for our 2,000+ intelligence analysts is a prime driver for BAE Systems Intelligence & Security sector’s Advanced Analytics Lab. By using Robotic Process Automation (RPA) tools like UiPath we could rapidly augment our analysts with tailored training courses and communities of practice (like the Robotic Operations Center), driving gains in efficiency and effectiveness. Analysts with no programming foundation can build automation models or “bots” to address repetitive tasks.
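
As a rough illustration of the kind of repetitive chore such a bot might take over: UiPath bots are normally assembled visually in UiPath Studio rather than written as code, so the Python stand-in below is purely illustrative, and the file layout and column names are hypothetical.

```python
# Illustrative sketch only: UiPath bots are built in its visual Studio, not in
# Python. This stand-in shows the kind of repetitive task an analyst might
# automate -- merging daily report files, de-duplicating them, and flagging
# rows that still need human review. File names and columns are hypothetical.
import glob
import pandas as pd

def consolidate_reports(pattern="reports/daily_*.csv"):
    frames = [pd.read_csv(path) for path in glob.glob(pattern)]
    merged = pd.concat(frames, ignore_index=True)
    # De-duplicate repeated entries, a common manual chore.
    merged = merged.drop_duplicates(subset=["report_id"])
    # Flag records that still require an analyst's judgment.
    merged["needs_review"] = merged["confidence"] < 0.5
    return merged

if __name__ == "__main__":
    summary = consolidate_reports()
    summary.to_csv("consolidated_report.csv", index=False)
```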

 

How will the bots from the Robotic Operations Center be used to combat cybercrime?

There is a major need for applying AI to external threat data collection for cyber threat analysis. At RSA 2020, we partnered with Dell to showcase their AI Ready Bundle for Machine Learning, which includes NVIDIA GPUs, libraries and frameworks, and management software in a complete solution stack. We showcased human-machine teaming by walking conference-goers through the creation of an object detection model used to filter publicly available data and identify physical threat hot spots, which may trigger cybercrime.
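
As a rough sketch of that kind of filtering pipeline, the snippet below uses an off-the-shelf pretrained detector to keep only images containing objects of interest for analyst review. The model choice, label set, and threshold are assumptions for illustration, not BAE Systems' actual stack.

```python
# Hedged sketch: a pretrained object detector scores publicly available images
# so only frames containing objects of interest are routed to an analyst.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # torchvision >= 0.13

def frames_of_interest(image_paths, labels_of_interest, score_threshold=0.7):
    hits = []
    for path in image_paths:
        img = convert_image_dtype(read_image(path), torch.float)
        with torch.no_grad():
            detections = model([img])[0]
        keep = [
            int(lbl) for lbl, score in zip(detections["labels"], detections["scores"])
            if score >= score_threshold and int(lbl) in labels_of_interest
        ]
        if keep:
            hits.append((path, keep))  # route this frame to an analyst
    return hits
```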

 

Vast seas of big data will be collected to train the neural networks used by the bots. What are some of the datasets that will be collected?

BAE Systems was recently awarded the Army’s Open Source Intelligence (OSINT) contract responsible for integrating big data capabilities into our secure cloud hosting environment.

 

Could you describe some of the current deep learning methodologies being worked on at BAE Systems?

Some of the areas where we are applying deep learning include motion imagery analysis, humanitarian disaster relief, and COVID-19 response.

 

Do you believe that object detection, and classification, is still an issue when it comes to objects which are only partially visible or obscured by other objects?

Computer vision models are less effective when objects are partially obscured, but for national mission initiatives like Foundational Military Intelligence, even models with relatively high false positive rates can still support decision advantage.

 

What are some of the other challenges facing computer vision?

Data labeling is a challenge. We've partnered with several data labeling companies to label unclassified data, but for classified data we rely on our intelligence analyst workforce to support these computer vision training initiatives, and that workforce is a finite resource.

Thank you for this interview. Anyone who wishes to learn more may visit BAE Systems.


Antoine Tardif is a Futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com, and has invested in over 50 AI & blockchain projects. He is the Co-Founder of Securities.io, a news website focusing on digital securities, and is a founding partner of unite.AI. He is also a member of the Forbes Technology Council.

Cybersecurity

AI Experts Rank Deepfakes and 19 Other AI-Based Crimes By Danger Level



A new report published by University College London aimed to identify the many different ways that AI could potentially assist criminals over the next 15 years. The report had 31 AI experts examine 20 different methods of using AI to carry out crimes and rank those methods according to variables like how easy the crime would be to commit, the potential societal harm it could do, the amount of money a criminal could make, and how difficult the crime would be to stop. According to the results of the report, deepfakes posed the greatest threat to law-abiding citizens and society generally, as their potential for exploitation by criminals and terrorists is high.

The AI experts ranked deepfakes at the top of the list of potential AI threats because deepfakes are difficult to identify and counteract. Deepfakes are constantly getting better at fooling even the eyes of deepfake experts, and AI-based methods of detecting them are often unreliable. In terms of their capacity for harm, deepfakes can easily be used by bad actors to discredit trusted, expert figures or to swindle people by posing as loved ones or other trusted individuals. If deepfakes become abundant, people could begin to lose trust in any audio or video media, which could cause them to lose faith in the validity of real events and facts.

Dr. Matthew Caldwell, from UCL Computer Science, was the first author on the paper. Caldwell underlines the growing danger of deepfakes as more and more of our activity moves online. As Caldwell was quoted by UCL News:

“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”

The team of experts ranked five other emerging AI technologies as highly concerning potential catalysts for new kinds of crime: driverless vehicles being used as weapons, hack attacks on AI-controlled systems and devices, online data collection for the purposes of blackmail, AI-based phishing featuring customized messages, and fake news/misinformation in general.

According to Shane Johnson, the Director of the Dawes Centre for Future Crimes at UCL, the goal of the study was to identify possible threats associated with newly emerging technologies and hypothesize ways to get ahead of these threats. Johnson says that as the speed of technological change increases, it’s imperative that “we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur”.

Regarding the fourteen other possible crimes on the list, they were put into one of two categories: moderate concern and low concern.

AI crimes of moderate concern include the misuse of military robots, data poisoning, automated attack drones, learning-based cyberattacks, denial of service attacks for online activities, manipulating financial/stock markets, snake oil (sale of fraudulent services cloaked in AI/ML terminology), and tricking face recognition.

Low concern AI-based crimes include the forgery of art or music, AI-assisted stalking, fake reviews authored by AI, evading AI detection methods, and “burglar bots” (bots which break into people’s homes to steal things).
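
For illustration only, a toy scoring scheme along the lines of the criteria the panel weighed (harm, criminal profit, achievability, difficulty of defeat) might look like the sketch below. The numbers and weights are invented; the study itself relied on structured expert judgment, not any such formula.

```python
# Toy illustration of ranking AI-enabled threats by weighted criteria.
# Scores (1-5) and weights are made up for demonstration purposes.
THREATS = {
    "deepfakes":          {"harm": 5, "profit": 4, "achievability": 4, "defeatability": 5},
    "driverless_weapons": {"harm": 5, "profit": 2, "achievability": 3, "defeatability": 4},
    "tailored_phishing":  {"harm": 3, "profit": 4, "achievability": 5, "defeatability": 3},
    "burglar_bots":       {"harm": 2, "profit": 2, "achievability": 2, "defeatability": 2},
}
WEIGHTS = {"harm": 0.4, "profit": 0.2, "achievability": 0.2, "defeatability": 0.2}

def score(factors):
    return sum(WEIGHTS[name] * value for name, value in factors.items())

for name, factors in sorted(THREATS.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(factors):.2f}")
```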

Of course, AI models themselves can be used to help combat some of these crimes. Recently, AI models have been deployed to assist in the detection of money laundering schemes by flagging suspicious financial transactions. The results are analyzed by human operators who approve or deny each alert, and that feedback is used to further train the model. It seems likely that the future will involve AIs being pitted against one another, with criminals designing their best AI-assisted tools while security firms, law enforcement, and other ethical AI designers build their own AI systems to counter them.
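
A minimal sketch of that human-in-the-loop pattern, assuming synthetic transaction features and invented analyst verdicts, might look like this:

```python
# Hedged sketch: an anomaly detector flags unusual transactions, analysts
# confirm or reject the alerts, and the confirmed labels feed a supervised
# model. Features, data, and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
# Columns: amount, hour of day, transfers in the last 24h (synthetic data).
transactions = rng.normal(loc=[200, 14, 3], scale=[150, 5, 2], size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
alerts = np.where(detector.predict(transactions) == -1)[0]

# Analysts review the alerts; here we fake their verdicts.
analyst_labels = rng.integers(0, 2, size=len(alerts))

# The feedback trains a supervised model that sharpens future alerting.
clf = RandomForestClassifier(random_state=0).fit(transactions[alerts], analyst_labels)
```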


Autonomous Vehicles

Sarah Tatsis, VP, Advanced Technology Development Labs at BlackBerry – Interview Series



Sarah Tatsis is the Vice President of Advanced Technology Development Labs at BlackBerry.

BlackBerry already secures more than 500M endpoints including 150M cars on the road. BlackBerry is leading the way with a single platform for securing, managing and optimizing how intelligent endpoints are deployed in the enterprise, enabling customers to stay ahead of the technology curve that will reshape every industry.

BlackBerry launched the Advanced Technology Development Labs (BlackBerry Labs) in late 2019. What was the strategic importance of creating an entirely new business division for BlackBerry?

As an innovation accelerator, BlackBerry Advanced Technology Development Labs is an intentional investment of 120 team members into the future of the company. The rise of the Internet of Things (IoT) alongside a dynamic threat landscape has fostered a climate where organizations have to guard against new threats and breaches at all times. We’ve handpicked the team to include experts in the embedded IoT space with diverse capabilities, including strong data science expertise, whose innovation funnel investigates, incubates and develops technologies to keep BlackBerry at the forefront of security innovation.  ATD Labs works in strong partnership with the other BlackBerry business units, such as QNX, to further the company’s commitment to safety, security and data privacy for its customers. BlackBerry Labs is also partnering with universities on active research and development. We’re quite proud of these initiatives and think they will greatly benefit our future roadmap.

Last year, BlackBerry Labs successfully integrated Cylance’s machine learning technology into BlackBerry’s product pipeline. BlackBerry Labs is currently focused on incubating and developing new concepts to accelerate the innovation roadmaps for our Spark and IoT business units.  My role is primarily helping to drive the innovation funnel and partner with our business units to deliver valuable solutions for our customers.

 

What type of products are being developed at BlackBerry Labs?

BlackBerry Labs is facilitating applied research and using insights gained to innovate in the lines of business where we’re already developing market-leading solutions. For instance, we’re applying machine learning and data science to our existing areas of application, including automotive, mobile security, etc. This is possible in large part due to the influx of BlackBerry Cylance technology and expertise, which allows us to combine our ML pipeline and market knowledge to create solutions that are securing information and devices in a really comprehensive way. As new technologies and threats emerge, BlackBerry Labs will allow us to take a proactive approach to cybersecurity, not only updating our existing solutions, but evaluating how we can branch out and provide a more comprehensive, data-based, and diverse portfolio to secure the Internet of Things.

At CES, for instance, we unveiled an AI-based transportation solution geared towards OEMs and commercial fleets. This solution provides a holistic view of the security and health of a vehicle and provides control over that security for a manufacturer or fleet manager. It also uses machine learning based continuous authentication to identify a driver of a vehicle based on past driving behavior.  Born in BlackBerry Labs, this concept marked the first time BlackBerry Cylance’s AI and ML technologies have been integrated with BlackBerry QNX solutions, which are currently powering upwards of 150 million vehicles on the road today.
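As a rough illustration of what ML-based continuous driver authentication can look like, the sketch below trains a classifier on synthetic telemetry features and scores new driving windows against the enrolled driver. The features, data, and model choice are assumptions for illustration, not BlackBerry's implementation.

```python
# Illustrative sketch of continuous authentication from driving behavior.
# All data here is synthetic; feature names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
# Synthetic telemetry: mean speed, braking harshness, steering variance, accel jitter.
owner = rng.normal([60, 0.3, 0.10, 0.2], 0.05, size=(400, 4))
others = rng.normal([55, 0.5, 0.18, 0.4], 0.05, size=(400, 4))

X = np.vstack([owner, others])
y = np.array([1] * len(owner) + [0] * len(others))  # 1 = registered driver

model = GradientBoostingClassifier().fit(X, y)

def still_the_owner(recent_window, threshold=0.8):
    """Return True if recent driving behavior matches the enrolled driver."""
    return model.predict_proba(np.atleast_2d(recent_window))[0, 1] >= threshold
```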

For additional insights into how we envision AI and ML shaping the world of mobility in the years to come, I would encourage you to read ‘Security Confidence Through Artificial Intelligence and Machine Learning for Smart Mobility’ from our recently released ‘Road to Mobility’ guide. Also released at this year’s CES, The Road to Mobility: The 2020 Guide to Trends and Technology for Smart Cities and Transportation, is a comprehensive resource that government regulators, automotive executives and technology innovators can turn to for forward-thinking considerations for making safe and secure autonomous and connected vehicles a reality, delivering a transportation future that drivers, passengers and pedestrians alike can trust.

Featuring a mix of insights from both our own internal experts and recognized voices from across the transportation industry, the guide provides practical strategies for anyone who’s interested in playing a vital role in shaping what the vehicles and infrastructure of our shared autonomous future will look like.

 

How important is artificial intelligence to the future of BlackBerry?

As both IoT and cybersecurity risk explodes, traditional methods of keeping organizations, things, and people safe and secure are becoming unscalable and ineffective.  Preventing, detecting, and responding to potential threats needs to account for large amounts of data and intelligent automation of appropriate responses.  AI and data science include tools that address these challenges and are therefore critical to the roadmap of BlackBerry. These tools allow BlackBerry to provide even greater value to our customers by reducing risk in efficient ways.  BlackBerry leverages AI to deliver innovative solutions in the areas of cybersecurity, safety and data privacy as part of our strategy to connect, secure, and manage every endpoint in the Internet of Things.

For instance, BlackBerry trains our endpoint protection AI model against billions of files, good and bad, so that it learns to autonomously convict, or not convict, files pre-execution. The result of this massive, ongoing training effort is a proven track record of blocking payloads attempting to exploit zero-days for up to two years into the future.

The ability to protect organizations from zero-day payloads, well before they are developed and deployed, means that when other IT teams are scrambling to recover from the next major outbreak, it will be business as usual for BlackBerry customers. For example, WannaCry, which rendered millions of computers across the globe useless, was prevented by a BlackBerry (Cylance) machine learning model developed, trained, and deployed 24 months before the malware was first reported.
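A heavily simplified sketch of pre-execution file conviction is shown below: a classifier trained on static features of known-good and known-bad files decides whether to block a new file before it ever runs. The features and data are synthetic stand-ins for the billions of real samples described above, not the production model.

```python
# Hedged sketch of pre-execution conviction using static file features.
# Feature choices and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Static features: file entropy, imported-API count, packed-section flag, size (MB).
benign  = np.column_stack([rng.normal(5.0, 1.0, 2000), rng.poisson(40, 2000),
                           rng.integers(0, 2, 2000) * 0.1, rng.normal(2.0, 1.0, 2000)])
malware = np.column_stack([rng.normal(7.5, 0.8, 2000), rng.poisson(10, 2000),
                           rng.integers(0, 2, 2000) * 0.9, rng.normal(1.0, 0.5, 2000)])

X = np.vstack([benign, malware])
y = np.array([0] * 2000 + [1] * 2000)

model = LogisticRegression(max_iter=1000).fit(X, y)

def convict(file_features, threshold=0.9):
    """Block the file pre-execution if the malicious probability is high."""
    return model.predict_proba(np.atleast_2d(file_features))[0, 1] >= threshold
```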

 

BlackBerry’s QNX software is embedded in more than 150 million cars. Can you discuss what this software does?

Our software provides the safe and secure software foundation for many of the systems within the vehicle. We have a broad portfolio of functional safety-certified software including our QNX operating system, development tools and middleware for autonomous and connected vehicles. In the automotive segment, the company’s software is deployed across the vehicle in systems such as ADAS and Safety Systems, Digital Cockpits, Digital Instrument Clusters, Infotainment, Telematics, Gateways, V2X and increasingly is being selected for chassis control and battery management systems that are advancing in complexity.

 

QNX software includes cybersecurity which protects autonomous vehicles from various cyber-attacks. Can you discuss some of the potential vulnerabilities that autonomous vehicles have to cyberattacks?

I think there is still a misconception out there that when you get into your car to drive home from work later today you might fall prey to a massive and coordinated vehicle cyberattack in which a rogue state threatens to hold you and your vehicle ransom unless you meet their demands. Hollywood movies are good at exaggerating what is possible, for example, the instant and entire compromise of fleets that undermines all safety systems in cars. While there are and always will be vulnerabilities within any system, exploiting a vulnerability at scale and with unprecedented reliability presents all kinds of hurdles that must be overcome, and would also require a significant investment of time, energy and resources. I think the general public needs to be reminded of this, and of the fact that hacks, if and when they do occur, are undesirable but not as catastrophic as movies would have you believe.

With a modern connected vehicle now containing well over 100 million lines of code and some of the most complex software ever deployed by automakers, the need for robust security has never been more important. As the software in a car grows so does the attack surface, which makes it more vulnerable to cyberattacks. Each poorly constructed piece of software represents a potential vulnerability that can be exploited by attackers.

BlackBerry is perfectly positioned to address these challenges as we have the solutions, the expertise and pedigree to be the safety certified and secure foundational software for autonomous and connected vehicles.

 

How does QNX software protect vehicles from these potential cyberattacks?

BlackBerry has a broad portfolio of products and services to protect vehicles against cybersecurity attacks. Our software has been deployed in critical embedded systems for over three decades and it’s worth pointing out, has also been certified to the highest level of automotive certification for functional safety with ISO 26262 ASIL D. As a company, we are investing significantly to broaden our safety and security product and services portfolio. Simply put, this is what our customers demand and rely on from us – a safe, secure and reliable software platform.

As it pertains to security, we firmly believe that security cannot be an afterthought. For automakers and the entire automotive supply chain, security should be inherent in the entire product lifecycle. As part of our ongoing commitment to security, we published a 7-Pillar Cybersecurity Recommendation to share our insight and expertise on this topic. In addition to our safety-certified and secure operating system and hypervisor, BlackBerry provides a host of security products– such as managed PKI, FIPS 140-2 certified toolkits, key inject tools, binary code static analysis tools, security credential management systems (SCMS), and secure Over-The-Air (OTA) software update technology. The world’s leading automakers, tier ones, and chip manufacturers continue to seek out BlackBerry’s safety-certified and highly-secure software for their next-generation vehicles. Together with our customers we will help to ensure that the future of mobility is safe, secure and built on trust.

 

Can you elaborate on what is the QNX Hypervisor?

The QNX® Hypervisor enables developers to partition, separate, and isolate safety-critical environments from non-safety critical environments reliably and securely; and to do so with the precision needed in an embedded production system. The QNX Hypervisor is also the world’s first ASIL D safety-certified commercial hypervisor.

 

What are some of the auto manufacturers using QNX software?

BlackBerry’s pedigree in safety, security, and continued innovation has led to its QNX technology being embedded in more than 150 million vehicles on the road today. It is used by the top seven automotive Tier 1s, and by 45+ OEMs including Audi, BMW, Ford, GM, Honda, Hyundai, Jaguar Land Rover, KIA, Maserati, Mercedes-Benz, Porsche, Toyota, and Volkswagen.

 

Is there anything else that you would like to share about BlackBerry Labs?

BlackBerry is committed to constant and consistent innovation – it's at the forefront of everything we do – but we also have a unique legacy as one of the pioneers of mobile security and, more broadly, of the idea of truly secure devices, endpoints, and communications. The lessons we learned over the past decades, as well as the technology we developed, will be instrumental in helping us to create a new standard for privacy and security as the tsunami of connected devices enters the IoT. Much of what BlackBerry has done in the past is re-emerging in front of us, and we're one of the only companies prioritizing a fundamental belief that all users deserve solutions that allow them to own their data and secure their communications – it's baked into our entire development pipeline and is one of our key differentiators. BlackBerry Labs is combining this history with new technology innovations to address the rapidly expanding landscape of mobile and connected endpoints, including vehicles, and increased security threats. Through our strong partnerships with BlackBerry business units we are creating new features, products, and services to deliver value to both new and existing customers.

Thank you for the wonderful interview and for your extensive responses. It's clear to me that BlackBerry is at the forefront of technology and that its best days are still ahead. Readers who wish to learn more should visit the BlackBerry website.


Artificial General Intelligence

Vahid Behzadan, Director of the Secure and Assured Intelligent Learning (SAIL) Lab – Interview Series



Vahid is an Assistant Professor of Computer Science and Data Science at the University of New Haven. He is also the director of the Secure and Assured Intelligent Learning (SAIL) Lab.

His research interests include safety and security of intelligent systems, psychological modeling of AI safety problems, security of complex adaptive systems, game theory, multi-agent systems, and cyber-security.

You have an extensive background in cybersecurity and keeping AI safe. Can you share your journey in how you became attracted to both fields?

My research trajectory has been fueled by two core interests of mine: finding out how things break, and learning about the mechanics of the human mind. I have been actively involved in cybersecurity since my early teen years, and consequently built my early research agenda around the classical problems of this domain. A few years into my graduate studies, I stumbled upon a rare opportunity to change my area of research. At that time, I had just come across the early works of Szegedy and Goodfellow on adversarial example attacks, and found the idea of attacking machine learning very intriguing. As I looked deeper into this problem, I came to learn about the more general field of AI safety and security, and found it to encompass many of my core interests, such as cybersecurity, cognitive sciences, economics, and philosophy. I also came to believe that research in this area is not only fascinating, but also vital for ensuring the long-term benefits and safety of the AI revolution.

 

You’re the director of the Secure and Assured Intelligent Learning (SAIL) Lab which works towards laying concrete foundations for the safety and security of intelligent machines. Could you go into some details regarding work undertaken by SAIL?

At SAIL, my students and I work on problems that lie in the intersection of security, AI, and complex systems. The primary focus of our research is on investigating the safety and security of intelligent systems, from both the theoretical and the applied perspectives. On the theoretical side, we are currently investigating the value-alignment problem in multi-agent settings and are developing mathematical tools to evaluate and optimize the objectives of AI agents with regards to stability and robust alignments. On the practical side, some of our projects explore the security vulnerabilities of the cutting-edge AI technologies, such as autonomous vehicles and algorithmic trading, and aim to develop techniques for evaluating and improving the resilience of such technologies to adversarial attacks.

We also work on the applications of machine learning in cybersecurity, such as automated penetration testing, early detection of intrusion attempts, and automated threat intelligence collection and analysis from open sources of data such as social media.
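
As a toy illustration of open-source threat-intelligence triage, a lightweight text classifier can flag posts that look threat-relevant for analyst review. The snippets and labels below are invented, and a real system would use far richer models and data.

```python
# Illustrative sketch: flag potentially threat-relevant posts from open sources.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "new RCE exploit for the login service dropped today",
    "zero-day being sold on a private forum, PoC attached",
    "great weather for a weekend hike",
    "our cafe has a new espresso blend",
]
labels = [1, 1, 0, 0]  # 1 = potentially threat-relevant

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(posts, labels)

print(triage.predict(["PoC exploit published for the VPN appliance"]))
```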

 

You recently led an effort to propose the modeling of AI safety problems as psychopathological disorders. Could you explain what this is?

This project addresses the rapidly growing complexity of AI agents and systems: it is already very difficult to diagnose, predict, and control unsafe behaviors of reinforcement learning agents in non-trivial settings by simply looking at their low-level configurations. In this work, we emphasize the need for higher-level abstractions in investigating such problems. Inspired by the scientific approaches to behavioral problems in humans, we propose psychopathology as a useful high-level abstraction for modeling and analyzing emergent deleterious behaviors in AI and AGI. As a proof of concept, we study the AI safety problem of reward hacking in an RL agent learning to play the classic game of Snake. We show that if we add a “drug” seed to the environment, the agent learns a sub-optimal behavior that can be described via neuroscientific models of addiction. This work also proposes control methodologies based on the treatment approaches used in psychiatry. For instance, we propose the use of artificially-generated reward signals as analogues of medication therapy for modifying the deleterious behavior of agents.
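
A toy version of this reward-hacking effect can be reproduced in a few lines. In the gridworld sketch below (a stand-in for Snake, not the paper's actual code), a "drug" cell pays a small reward on every visit, and a myopic Q-learning agent learns to loiter there instead of pursuing the real goal, while a far-sighted agent does not.

```python
# Toy illustration of reward hacking / "addiction" in a tiny gridworld.
# Everything here is a simplified stand-in for the paper's setup.
import numpy as np

N_STATES, DRUG, GOAL, START = 6, 1, 5, 2   # states 0..5, goal at the right end
ACTIONS = (-1, +1)                         # left, right

def step(state, action):
    nxt = int(np.clip(state + action, 0, N_STATES - 1))
    if nxt == GOAL:
        return nxt, 10.0, True             # the real objective: large terminal reward
    if nxt == DRUG:
        return nxt, 0.5, False             # small but endlessly repeatable "drug" reward
    return nxt, 0.0, False

def train(gamma, episodes=5000, alpha=0.1, horizon=40, seed=3):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(episodes):
        s = START
        for _ in range(horizon):
            a = rng.integers(len(ACTIONS))        # fully exploratory behavior policy;
            s2, r, done = step(s, ACTIONS[a])     # Q-learning is off-policy, so it
            target = r + gamma * (0 if done else Q[s2].max())   # still learns Q*
            Q[s, a] += alpha * (target - Q[s, a])
            if done:
                break
            s = s2
    return Q

# A myopic agent (gamma=0.1) heads for the drug cell; a far-sighted one (0.95)
# still pursues the goal.
for gamma in (0.1, 0.95):
    Q = train(gamma)
    choice = "drug" if Q[START].argmax() == 0 else "goal"
    print(f"gamma={gamma}: greedy move from start leads toward the {choice}")
```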

 

Do you have any concerns with AI safety when it comes to autonomous vehicles?

Autonomous vehicles are becoming prominent examples of deploying AI in cyber-physical systems. Considering the fundamental susceptibility of current machine learning technologies to mistakes and adversarial attacks, I am deeply concerned about the safety and security of even semi-autonomous vehicles. Also, the field of autonomous driving suffers from a serious lack of safety standards and evaluation protocols. However, I remain hopeful. Similar to natural intelligence, AI will also be prone to making mistakes. Yet, the objective of self-driving cars can still be satisfied if the rates and impact of such mistakes are made to be lower than those of human drivers. We are witnessing growing efforts to address these issues in the industry and academia, as well as the governments.

 

Hacking street signs with stickers or using other means can confuse the computer vision module of an autonomous vehicle. How big of an issue do you believe this is?

These stickers, and Adversarial Examples in general, give rise to fundamental challenges in the robustness of machine learning models. To quote George E. P. Box, “all models are wrong, but some are useful”. Adversarial examples exploit this “wrong”ness of models, which is due to their abstractive nature, as well as the limitations of sampled data upon which they are trained. Recent efforts in the domain of adversarial machine learning have resulted in tremendous strides towards increasing the resilience of deep learning models to such attacks. From a security point of view, there will always be a way to fool machine learning models. However, the practical objective of securing machine learning models is to increase the cost of implementing such attacks to the point of economic infeasibility.
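
For readers unfamiliar with the mechanics, the sketch below implements the classic fast gradient sign method (FGSM) of Goodfellow et al. against an untrained stand-in model; it only illustrates how a small, signed perturbation of the input is crafted, not any deployed attack or defense.

```python
# Minimal FGSM sketch: a tiny signed perturbation pushes the input in the
# direction that most increases the model's loss, which can flip its prediction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # untrained stand-in
loss_fn = nn.CrossEntropyLoss()

def fgsm(image, true_label, epsilon=0.05):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), true_label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp to valid range.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)     # a dummy "street sign" image
y = torch.tensor([3])
x_adv = fgsm(x, y)
print(model(x).argmax(1).item(), "->", model(x_adv).argmax(1).item())
```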

 

Your focus is on the safety and security features of both deep learning and deep reinforcement learning. Why is this so important?

Reinforcement Learning (RL) is the prominent method of applying machine learning to control problems, which by definition involve the manipulation of their environment. Therefore, I believe systems that are based on RL have significantly higher risks of causing major damages in the real-world compared to other machine learning methods such as classification. This problem is further exacerbated with the integration of Deep learning in RL, which enables the adoption of RL in highly complex settings. Also, it is my opinion that the RL framework is closely related to the underlying mechanisms of cognition in human intelligence, and studying its safety and vulnerabilities can lead to better insights into the limits of decision-making in our minds.

 

Do you believe that we are close to achieving Artificial General Intelligence (AGI)?

This is a notoriously hard question to answer. I believe that we currently have the building blocks of some architectures that can facilitate the emergence of AGI. However, it may take a few more years or decades to improve upon these architectures and to make them cost-efficient to train and maintain. Over the coming years, our agents are going to grow more intelligent at an accelerating rate. I don't think the emergence of AGI will be announced in the form of a [scientifically valid] headline, but rather will be the result of gradual progress. Also, I think we still do not have a widely accepted methodology to test and detect the existence of an AGI, and this may delay our realization of the first instances of AGI.

 

How do we maintain safety in an AGI system that is capable of thinking for itself and will most likely be exponentially more intelligent than humans?

I believe that the grand unified theory of intelligent behavior is economics: the study of how agents act and interact to achieve what they want. The decisions and actions of humans are determined by their objectives, their information, and the available resources. Societies and collaborative efforts emerge from their benefits to individual members of such groups. Another example is the criminal code, which deters certain decisions by attaching a high cost to actions that may harm society. In the same way, I believe that controlling the incentives and resources can enable the emergence of a state of equilibrium between humans and instances of AGI. Currently, the AI safety community investigates this thesis under the umbrella of value-alignment problems.

 

One of the areas you closely follow is counterterrorism. Do you have concerns with terrorists taking over AI or AGI systems?

There are numerous concerns about the misuse of AI technologies. In the case of terrorist operations, the major concern is the ease with which terrorists can develop and carry out autonomous attacks. A growing number of my colleagues are actively warning against the risks of developing autonomous weapons (see https://autonomousweapons.org/ ). One of the main problems with AI-enabled weaponry is in the difficulty of controlling the underlying technology: AI is at the forefront of open-source research, and anyone with access to the internet and consumer-grade hardware can develop harmful AI systems. I suspect that the emergence of autonomous weapons is inevitable, and believe that there will soon be a need for new technological solutions to counter such weapons. This can result in a cat-and-mouse cycle that fuels the evolution of AI-enabled weapons, which may give rise to serious existential risks in the long-term.

 

What can we do to keep AI systems safe from these adversarial agents?

The first and foremost step is education: All AI engineers and practitioners need to learn about the vulnerabilities of AI technologies, and consider the relevant risks in the design and implementation of their systems. As for more technical recommendations, there are various proposals and solution concepts that can be employed. For example, training machine learning agents in adversarial settings can improve their resilience and robustness against evasion and policy manipulation attacks (e.g., see my paper titled “Whatever Does Not Kill Deep Reinforcement Learning, Makes it Stronger“). Another solution is to directly account for the risk of adversarial attacks in the architecture of the agent (e.g., Bayesian approaches to risk modeling). There is however a major gap in this area, and it’s the need for universal metrics and methodologies for evaluating the robustness of AI agents against adversarial attacks. Current solutions are mostly ad hoc, and fail to provide general measures of resilience against all types of attacks.
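
A minimal sketch of adversarial training, one of the mitigations mentioned here, is shown below: each batch is perturbed to maximize the current model's loss before the model is updated on it. The data and model are synthetic stand-ins, not a production defense or the referenced paper's method.

```python
# Hedged sketch of adversarial training: attack the current model with an
# FGSM-style perturbation, then train on the attacked batch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def perturb(x, y, epsilon=0.1):
    """Craft a worst-case perturbation of the batch against the current model."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).detach()

X = torch.randn(512, 20)            # synthetic feature vectors
Y = (X[:, 0] > 0).long()            # synthetic labels

for step in range(200):
    x_adv = perturb(X, Y)           # attack the current model...
    opt.zero_grad()
    loss = loss_fn(model(x_adv), Y) # ...then train on the attacked batch
    loss.backward()
    opt.step()
```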

 

Is there anything else that you would like to share about any of these topics?

In 2014, Sculley et al. published a paper at the NeurIPS conference with a very enlightening title: "Machine Learning: The High-Interest Credit Card of Technical Debt". Even with all the advancements of the field in the past few years, this statement has yet to lose its validity. The current state of AI and machine learning is nothing short of awe-inspiring, but we have yet to fill a significant number of major gaps in both the foundations and the engineering dimensions of AI. This fact, in my opinion, is the most important takeaway of our conversation. I of course do not mean to discourage the commercial adoption of AI technologies, but only wish to enable the engineering community to account for the risks and limits of current AI technologies in their decisions.

I really enjoyed learning about the safety and security challenges of different types of AI systems. This is truly something that individuals, corporations, and governments need to become aware of. Readers who wish to learn more should visit the Secure and Assured Intelligent Learning (SAIL) Lab.
