
Cybersecurity

AI Security Monitoring & Job Recruitment Companies Raise Funds


VentureBeat reports on two sizable new funding rounds for startups developing artificial intelligence. Umbo Computer Vision (UCV) builds autonomous video security systems for businesses, while Xor is developing an AI chatbot platform for recruiters and job seekers. Both startups are located in San Francisco; UCV, a joint venture with Taiwan, also has bases there and in the UK. UCV raised $8 million for its AI-powered video security, while Xor raised $8.4 million for its platform.

Xor’s capital infusion came after a year in which the startup tripled its sales in the US, “reaching $2 million in annual recurring revenue and closing deals with over 100 customers in 15 countries, including ExxonMobil, Ikea, Baxter Personnel, Heineken, IBS, Aldi, Hoff, McDonald’s, and Mars.” As the company co-founder and CEO Aida Fazylova explains, she “started the company to let recruiters focus on the human touch — building relationships, interviewing candidates, and attracting the best talent to their companies. Meanwhile, AI takes care of repetitive tasks and provides 24/7 personalized service to every candidate. We are proud to get support from SignalFire and other amazing investors who help us drive our mission to make the recruitment experience better and more transparent for everyone.”

Xor’s chatbot “automates tedious job recruitment tasks, like scheduling interviews; sorting applications; and responding to questions via email, text, and messaging apps like Facebook Messenger and Skype. The eponymous Xor — which is hosted on Microsoft’s Azure — draws on over 500 sources for suitable candidates and screens those candidates autonomously, leveraging 103 different languages and algorithms trained on 17 different HR and recruitment data sets.”
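To make that automation concrete, here is a minimal, purely hypothetical sketch of the kind of screening and interview-scheduling logic such a chatbot performs. The skill requirements, message rules, and slot handling are invented for illustration and are not Xor's actual implementation.

```python
# Hypothetical sketch of recruiter-chatbot automation: intent routing,
# candidate screening, and interview scheduling. All names and rules are
# illustrative only and do not reflect Xor's system.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Candidate:
    name: str
    skills: set
    answers: dict = field(default_factory=dict)

REQUIRED_SKILLS = {"python", "sql"}  # hypothetical job requirements
INTERVIEW_SLOTS = [datetime(2020, 6, 1, 10) + timedelta(hours=i) for i in range(5)]

def screen(candidate: Candidate) -> bool:
    """Very simple autonomous screen: does the candidate cover the required skills?"""
    return REQUIRED_SKILLS.issubset(candidate.skills)

def handle_message(candidate: Candidate, text: str) -> str:
    """Route an incoming message (from email, SMS, Messenger, etc.)."""
    text = text.lower()
    if "interview" in text or "schedule" in text:
        slot = INTERVIEW_SLOTS.pop(0)  # book the next free slot
        return f"Booked your interview for {slot:%Y-%m-%d %H:%M}."
    if "salary" in text or "benefits" in text:
        return "A recruiter will follow up on compensation questions."
    return "Thanks! We received your message and will reply shortly."

if __name__ == "__main__":
    alice = Candidate("Alice", {"python", "sql", "spark"})
    print(screen(alice))  # True -> moves to the interview stage
    print(handle_message(alice, "Can we schedule an interview?"))
```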

According to Grand View Research, the chatbot market is expected to reach $1.23 billion by 2025, while Gartner predicts that chatbots will power 85% of all customer service interactions by the year 2020.

For its part, Umbo develops “software, hardware, and AI smarts that can detect and identify human behaviors related to security, such as intrusion, tailgating (when an unauthorized individual follows someone into private premises), and wall-scaling.”

The company says it has developed its AI systems entirely in-house, and the system incorporates three components: “AiCameras are built in-house and feature built-in AI chips, connecting directly to the cloud to bypass servers and video recording intermediates, such as NVRs or DVRs. Light is AI-powered software for detecting and issuing alerts on human-related security actions.” There is also “TruePlatform, a centralized platform where businesses can monitor and manage all their cameras, users, and security events.” As Shawn Guan, Umbo’s co-founder and CEO, points out, the company launched Umbo Light, “which implemented feedback that we gathered from our customers about what their primary wants from video security systems were. This allowed us to design and deliver a system based on the needs of those who use it most.”
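As a rough illustration of the camera-to-platform flow described above, the following hedged sketch shows an edge detection event being filtered and recorded as an alert. The event labels, confidence threshold, and class names are assumptions for illustration, not Umbo's actual code.

```python
# Hypothetical sketch of an edge-camera -> cloud alert flow: a camera detects a
# human-related event, software classifies it, and a central platform records
# the alert. Not Umbo's implementation.
from dataclasses import dataclass
from datetime import datetime

SECURITY_EVENTS = {"intrusion", "tailgating", "wall_scaling"}

@dataclass
class Detection:
    camera_id: str
    label: str          # output of the on-camera model, e.g. "intrusion"
    confidence: float
    timestamp: datetime

class Platform:
    """Stand-in for a centralized monitoring platform."""
    def __init__(self):
        self.alerts = []

    def ingest(self, det: Detection, threshold: float = 0.8):
        # Only raise alerts for security-relevant labels above a confidence bar.
        if det.label in SECURITY_EVENTS and det.confidence >= threshold:
            self.alerts.append(det)
            print(f"ALERT {det.camera_id}: {det.label} ({det.confidence:.2f})")

platform = Platform()
platform.ingest(Detection("cam-lobby-01", "tailgating", 0.93, datetime.utcnow()))
platform.ingest(Detection("cam-lobby-01", "loitering", 0.95, datetime.utcnow()))  # not a tracked event, ignored
```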

The global video surveillance market, which increasingly relies on AI, was pegged at $28 billion in 2017 and is expected to grow to more than $87 billion by 2025.

 


Former diplomat and translator for the UN, currently freelance journalist/writer/researcher, focusing on modern technology, artificial intelligence, and modern culture.

Autonomous Vehicles

Sarah Tatsis, VP, Advanced Technology Development Labs at BlackBerry – Interview Series


Sarah Tatsis is the Vice President of Advanced Technology Development Labs at BlackBerry.

BlackBerry already secures more than 500 million endpoints, including 150 million cars on the road. The company offers a single platform for securing, managing, and optimizing how intelligent endpoints are deployed in the enterprise, enabling customers to stay ahead of the technology curve that will reshape every industry.

BlackBerry launched the Advanced Technology Development Labs (BlackBerry Labs) in late 2019. What was the strategic importance of creating an entirely new business division for BlackBerry?

As an innovation accelerator, BlackBerry Advanced Technology Development Labs is an intentional investment of 120 team members into the future of the company. The rise of the Internet of Things (IoT) alongside a dynamic threat landscape has fostered a climate where organizations have to guard against new threats and breaches at all times. We’ve handpicked the team to include experts in the embedded IoT space with diverse capabilities, including strong data science expertise, whose innovation funnel investigates, incubates and develops technologies to keep BlackBerry at the forefront of security innovation.  ATD Labs works in strong partnership with the other BlackBerry business units, such as QNX, to further the company’s commitment to safety, security and data privacy for its customers. BlackBerry Labs is also partnering with universities on active research and development. We’re quite proud of these initiatives and think they will greatly benefit our future roadmap.

Last year, BlackBerry Labs successfully integrated Cylance’s machine learning technology into BlackBerry’s product pipeline. BlackBerry Labs is currently focused on incubating and developing new concepts to accelerate the innovation roadmaps for our Spark and IoT business units.  My role is primarily helping to drive the innovation funnel and partner with our business units to deliver valuable solutions for our customers.

 

What type of products are being developed at BlackBerry Labs?

BlackBerry Labs is facilitating applied research and using insights gained to innovate in the lines of business where we’re already developing market-leading solutions. For instance, we’re applying machine learning and data science to our existing areas of application, including automotive, mobile security, etc. This is possible in large part due to the influx of BlackBerry Cylance technology and expertise, which allows us to combine our ML pipeline and market knowledge to create solutions that are securing information and devices in a really comprehensive way. As new technologies and threats emerge, BlackBerry Labs will allow us to take a proactive approach to cybersecurity, not only updating our existing solutions, but evaluating how we can branch out and provide a more comprehensive, data-based, and diverse portfolio to secure the Internet of Things.

At CES, for instance, we unveiled an AI-based transportation solution geared towards OEMs and commercial fleets. This solution provides a holistic view of the security and health of a vehicle and provides control over that security for a manufacturer or fleet manager. It also uses machine learning based continuous authentication to identify a driver of a vehicle based on past driving behavior.  Born in BlackBerry Labs, this concept marked the first time BlackBerry Cylance’s AI and ML technologies have been integrated with BlackBerry QNX solutions, which are currently powering upwards of 150 million vehicles on the road today.
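A minimal sketch of what machine learning-based continuous driver authentication can look like in principle: a model is fit on an enrolled driver's historical trip features and flags trips that do not match. The feature choices, library, and thresholds below are illustrative assumptions, not BlackBerry's implementation.

```python
# Continuous driver authentication as anomaly detection over driving behavior.
# Features and thresholds are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Historical trips by the enrolled driver: [avg_speed, brake_intensity, steering_var]
enrolled_trips = rng.normal(loc=[60.0, 0.3, 0.1], scale=[5.0, 0.05, 0.02], size=(500, 3))

model = IsolationForest(contamination=0.05, random_state=0).fit(enrolled_trips)

def same_driver(trip_features: np.ndarray) -> bool:
    """Return True if the trip looks like the enrolled driver."""
    return model.predict(trip_features.reshape(1, -1))[0] == 1

print(same_driver(np.array([61.0, 0.31, 0.11])))   # typical trip -> likely True
print(same_driver(np.array([95.0, 0.80, 0.40])))   # very different style -> likely False
```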

For additional insights into how we envision AI and ML shaping the world of mobility in the years to come, I would encourage you to read ‘Security Confidence Through Artificial Intelligence and Machine Learning for Smart Mobility’ from our recently released ‘Road to Mobility’ guide. Also released at this year’s CES, The Road to Mobility: The 2020 Guide to Trends and Technology for Smart Cities and Transportation is a comprehensive resource that government regulators, automotive executives and technology innovators can turn to for forward-thinking considerations for making safe and secure autonomous and connected vehicles a reality, delivering a transportation future that drivers, passengers and pedestrians alike can trust.

Featuring a mix of insights from both our own internal experts and recognized voices from across the transportation industry, the guide provides practical strategies for anyone who’s interested in playing a vital role in shaping what the vehicles and infrastructure of our shared autonomous future will look like.

 

How important is artificial intelligence to the future of BlackBerry?

As both IoT and cybersecurity risk explodes, traditional methods of keeping organizations, things, and people safe and secure are becoming unscalable and ineffective.  Preventing, detecting, and responding to potential threats needs to account for large amounts of data and intelligent automation of appropriate responses.  AI and data science include tools that address these challenges and are therefore critical to the roadmap of BlackBerry. These tools allow BlackBerry to provide even greater value to our customers by reducing risk in efficient ways.  BlackBerry leverages AI to deliver innovative solutions in the areas of cybersecurity, safety and data privacy as part of our strategy to connect, secure, and manage every endpoint in the Internet of Things.

For instance, BlackBerry trains our endpoint protection AI model against billions of files, good and bad, so that it learns to autonomously convict, or not convict, files pre-execution. The result of this massive, ongoing training effort is a proven track record of blocking payloads attempting to exploit zero-days for up to two years into the future.
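The pre-execution conviction idea can be sketched as a classifier over static file features that decides whether to block a file before it ever runs. The features, data, and model below are synthetic assumptions for illustration only, not the Cylance model.

```python
# Sketch: train a classifier on static features extracted from files
# (sizes, entropy, import counts) and convict files pre-execution.
# Data is synthetic; features are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Hypothetical static features: [file_size_kb, section_entropy, num_imports]
benign = np.column_stack([rng.normal(800, 200, n), rng.normal(5.5, 0.5, n), rng.normal(60, 15, n)])
malware = np.column_stack([rng.normal(300, 150, n), rng.normal(7.2, 0.4, n), rng.normal(15, 8, n)])
X = np.vstack([benign, malware])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = benign, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

def convict(features) -> bool:
    """Return True to block the file before execution."""
    return clf.predict(np.asarray(features).reshape(1, -1))[0] == 1
```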

The ability to protect organizations from zero-day payloads, well before they are developed and deployed, means that when other IT teams are scrambling to recover from the next major outbreak, it will be business as usual for BlackBerry customers. For example, WannaCry, which rendered millions of computers across the globe useless, was prevented by a BlackBerry (Cylance) machine learning model developed, trained, and deployed 24 months before the malware was first reported.

 

BlackBerry’s QNX software is embedded in more than 150 million cars. Can you discuss what this software does?

Our software provides the safe and secure software foundation for many of the systems within the vehicle. We have a broad portfolio of functional safety-certified software including our QNX operating system, development tools and middleware for autonomous and connected vehicles. In the automotive segment, the company’s software is deployed across the vehicle in systems such as ADAS and Safety Systems, Digital Cockpits, Digital Instrument Clusters, Infotainment, Telematics, Gateways, V2X and increasingly is being selected for chassis control and battery management systems that are advancing in complexity.

 

QNX software includes cybersecurity which protects autonomous vehicles from various cyber-attacks. Can you discuss some of the potential vulnerabilities that autonomous vehicles have to cyberattacks?

I think there is still a misconception out there that when you get into your car to drive home from work later today, you might fall prey to a massive and coordinated vehicle cyberattack in which a rogue state threatens to hold you and your vehicle ransom unless you meet their demands. Hollywood movies are good at exaggerating what is possible, for example, the instant and complete compromise of fleets that undermines all safety systems in cars. While there are and always will be vulnerabilities within any system, exploiting a vulnerability at scale and with unprecedented reliability presents all kinds of hurdles that must be overcome, and would also require a significant investment of time, energy and resources. I think the general public needs to be reminded of this, and of the fact that hacks, if and when they do occur, are undesirable but not as dramatic as movies would have you believe.

With a modern connected vehicle now containing well over 100 million lines of code and some of the most complex software ever deployed by automakers, the need for robust security has never been more important. As the software in a car grows so does the attack surface, which makes it more vulnerable to cyberattacks. Each poorly constructed piece of software represents a potential vulnerability that can be exploited by attackers.

BlackBerry is perfectly positioned to address these challenges as we have the solutions, the expertise and pedigree to be the safety certified and secure foundational software for autonomous and connected vehicles.

 

How does QNX software protect vehicles from these potential cyberattacks?

BlackBerry has a broad portfolio of products and services to protect vehicles against cybersecurity attacks. Our software has been deployed in critical embedded systems for over three decades and it’s worth pointing out, has also been certified to the highest level of automotive certification for functional safety with ISO 26262 ASIL D. As a company, we are investing significantly to broaden our safety and security product and services portfolio. Simply put, this is what our customers demand and rely on from us – a safe, secure and reliable software platform.

As it pertains to security, we firmly believe that security cannot be an afterthought. For automakers and the entire automotive supply chain, security should be inherent in the entire product lifecycle. As part of our ongoing commitment to security, we published a 7-Pillar Cybersecurity Recommendation to share our insight and expertise on this topic. In addition to our safety-certified and secure operating system and hypervisor, BlackBerry provides a host of security products– such as managed PKI, FIPS 140-2 certified toolkits, key inject tools, binary code static analysis tools, security credential management systems (SCMS), and secure Over-The-Air (OTA) software update technology. The world’s leading automakers, tier ones, and chip manufacturers continue to seek out BlackBerry’s safety-certified and highly-secure software for their next-generation vehicles. Together with our customers we will help to ensure that the future of mobility is safe, secure and built on trust.

 

Can you elaborate on what is the QNX Hypervisor?

The QNX® Hypervisor enables developers to partition, separate, and isolate safety-critical environments from non-safety critical environments reliably and securely; and to do so with the precision needed in an embedded production system. The QNX Hypervisor is also the world’s first ASIL D safety-certified commercial hypervisor.

 

What are some of the auto manufacturers using QNX software?

BlackBerry’s pedigree in safety, security, and continued innovation has led to its QNX technology being embedded in more than 150 million vehicles on the road today. It is used by the top seven automotive Tier 1s, and by 45+ OEMs including Audi, BMW, Ford, GM, Honda, Hyundai, Jaguar Land Rover, KIA, Maserati, Mercedes-Benz, Porsche, Toyota, and Volkswagen.

 

Is there anything else that you would like to share about Blackberry Labs?

BlackBerry is committed to constant and consistent innovation; it’s at the forefront of everything we do. But we also have a unique legacy as one of the pioneers of mobile security and, beyond that, of the idea of truly secure devices, endpoints, and communications. The lessons we learned over the past decades, as well as the technology we developed, will be instrumental in helping us create a new standard for privacy and security as the tsunami of connected devices enters the IoT. Much of what BlackBerry has done in the past is re-emerging in front of us, and we’re one of the only companies prioritizing a fundamental belief that all users deserve solutions that allow them to own their data and secure their communications; it’s baked into our entire development pipeline and is one of our key differentiators. BlackBerry Labs is combining this history with new technology innovations to address the rapidly expanding landscape of mobile and connected endpoints, including vehicles, and increased security threats. Through our strong partnerships with BlackBerry business units, we are creating new features, products, and services to deliver value to both new and existing customers.

Thank you for the wonderful interview and for your extensive responses. It’s clear to me that BlackBerry is at the forefront of technology and its best days are still ahead. Readers who wish to learn more should visit the BlackBerry website.


Artificial General Intelligence

Vahid Behzadan, Director of the Secure and Assured Intelligent Learning (SAIL) Lab – Interview Series


Vahid Behzadan is an Assistant Professor of Computer Science and Data Science at the University of New Haven. He is also the director of the Secure and Assured Intelligent Learning (SAIL) Lab.

His research interests include safety and security of intelligent systems, psychological modeling of AI safety problems, security of complex adaptive systems, game theory, multi-agent systems, and cyber-security.

You have an extensive background in cybersecurity and keeping AI safe. Can you share your journey in how you became attracted to both fields?

My research trajectory has been fueled by two core interests of mine: finding out how things break, and learning about the mechanics of the human mind. I have been actively involved in cybersecurity since my early teen years, and consequently built my early research agenda around the classical problems of this domain. A few years into my graduate studies, I stumbled upon a rare opportunity to change my area of research. At that time, I had just come across the early works of Szegedy and Goodfellow on adversarial example attacks, and found the idea of attacking machine learning very intriguing. As I looked deeper into this problem, I came to learn about the more general field of AI safety and security, and found it to encompass many of my core interests, such as cybersecurity, cognitive sciences, economics, and philosophy. I also came to believe that research in this area is not only fascinating, but also vital for ensuring the long-term benefits and safety of the AI revolution.

 

You’re the director of the Secure and Assured Intelligent Learning (SAIL) Lab which works towards laying concrete foundations for the safety and security of intelligent machines. Could you go into some details regarding work undertaken by SAIL?

At SAIL, my students and I work on problems that lie in the intersection of security, AI, and complex systems. The primary focus of our research is on investigating the safety and security of intelligent systems, from both the theoretical and the applied perspectives. On the theoretical side, we are currently investigating the value-alignment problem in multi-agent settings and are developing mathematical tools to evaluate and optimize the objectives of AI agents with regards to stability and robust alignments. On the practical side, some of our projects explore the security vulnerabilities of the cutting-edge AI technologies, such as autonomous vehicles and algorithmic trading, and aim to develop techniques for evaluating and improving the resilience of such technologies to adversarial attacks.

We also work on the applications of machine learning in cybersecurity, such as automated penetration testing, early detection of intrusion attempts, and automated threat intelligence collection and analysis from open sources of data such as social media.

 

You recently led an effort to propose the modeling of AI safety problems as psychopathological disorders. Could you explain what this is?

This project addresses the rapidly growing complexity of AI agents and systems: it is already very difficult to diagnose, predict, and control unsafe behaviors of reinforcement learning agents in non-trivial settings by simply looking at their low-level configurations. In this work, we emphasize the need for higher-level abstractions in investigating such problems. Inspired by the scientific approaches to behavioral problems in humans, we propose psychopathology as a useful high-level abstraction for modeling and analyzing emergent deleterious behaviors in AI and AGI. As a proof of concept, we study the AI safety problem of reward hacking in an RL agent learning to play the classic game of Snake. We show that if we add a “drug” seed to the environment, the agent learns a sub-optimal behavior that can be described via neuroscientific models of addiction. This work also proposes control methodologies based on the treatment approaches used in psychiatry. For instance, we propose the use of artificially-generated reward signals as analogues of medication therapy for modifying the deleterious behavior of agents.
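As a toy stand-in for that experiment, the sketch below compares a myopic policy that always takes a high-immediate-reward "drug" action against one that ignores it. The environment and numbers are invented for illustration; this is not the Snake setup from the paper.

```python
# Toy illustration of the reward-hacking / "addiction" framing: a "drug"
# action gives a large immediate reward but degrades all future rewards,
# so a myopic agent ends up with a much lower return.
def rollout(policy, steps=50):
    tolerance, total = 0, 0.0
    for _ in range(steps):
        action = policy(tolerance)
        if action == "drug":
            reward = 3.0 - tolerance           # big now, shrinking as tolerance builds
            tolerance = min(tolerance + 1, 3)
        else:
            reward = 1.0 - 0.3 * tolerance     # honest reward, hurt by tolerance
        total += reward
    return total

def greedy(tolerance):      # always grabs the immediate reward
    return "drug"

def healthy(tolerance):     # ignores the shortcut
    return "work"

print("greedy agent :", rollout(greedy))    # looks good for a few steps, poor long-run return
print("healthy agent:", rollout(healthy))
```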

 

Do you have any concerns with AI safety when it comes to autonomous vehicles?

Autonomous vehicles are becoming prominent examples of deploying AI in cyber-physical systems. Considering the fundamental susceptibility of current machine learning technologies to mistakes and adversarial attacks, I am deeply concerned about the safety and security of even semi-autonomous vehicles. Also, the field of autonomous driving suffers from a serious lack of safety standards and evaluation protocols. However, I remain hopeful. Similar to natural intelligence, AI will also be prone to making mistakes. Yet, the objective of self-driving cars can still be satisfied if the rates and impact of such mistakes are made to be lower than those of human drivers. We are witnessing growing efforts to address these issues in industry and academia, as well as from governments.

 

Hacking street signs with stickers or using other means can confuse the computer vision module of an autonomous vehicle. How big of an issue do you believe this is?

These stickers, and Adversarial Examples in general, give rise to fundamental challenges in the robustness of machine learning models. To quote George E. P. Box, “all models are wrong, but some are useful”. Adversarial examples exploit this “wrong”ness of models, which is due to their abstractive nature, as well as the limitations of sampled data upon which they are trained. Recent efforts in the domain of adversarial machine learning have resulted in tremendous strides towards increasing the resilience of deep learning models to such attacks. From a security point of view, there will always be a way to fool machine learning models. However, the practical objective of securing machine learning models is to increase the cost of implementing such attacks to the point of economic infeasibility.
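For readers unfamiliar with adversarial examples, here is a minimal sketch of the fast gradient sign method (FGSM) applied to a simple logistic-regression model: the input is nudged in the direction that most increases the loss, within a small budget. The weights and inputs are made up; real attacks target far larger models.

```python
# FGSM on a linear (logistic-regression) model, purely for illustration.
import numpy as np

w = np.array([2.0, -1.0, 0.5])   # trained weights (assumed)
b = -0.2

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, epsilon=0.1):
    """Return an adversarial copy of x for true label y in {0, 1}."""
    p = predict_proba(x)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for a linear model
    return x + epsilon * np.sign(grad_x)

x = np.array([0.2, 0.4, 0.1])
y = 1
print("clean prob:", predict_proba(x))
print("adv   prob:", predict_proba(fgsm(x, y, epsilon=0.3)))  # confidence in the true class drops
```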

 

Your focus is on the safety and security features of both deep learning and deep reinforcement learning. Why is this so important?

Reinforcement Learning (RL) is the prominent method of applying machine learning to control problems, which by definition involve the manipulation of their environment. Therefore, I believe systems that are based on RL have significantly higher risks of causing major damages in the real-world compared to other machine learning methods such as classification. This problem is further exacerbated with the integration of Deep learning in RL, which enables the adoption of RL in highly complex settings. Also, it is my opinion that the RL framework is closely related to the underlying mechanisms of cognition in human intelligence, and studying its safety and vulnerabilities can lead to better insights into the limits of decision-making in our minds.

 

Do you believe that we are close to achieving Artificial General Intelligence (AGI)?

This is a notoriously hard question to answer. I believe that we currently have the building blocks of some architectures that can facilitate the emergence of AGI. However, it may take a few more years or decades to improve upon these architectures and enhance the cost-efficiency of training and maintaining these architectures. Over the coming years, our agents are going to grow more intelligent at a rapidly growing rate. I don’t think the emergence of AGI will be announced in the form of a [scientifically valid] headline, but as the result of gradual progress. Also, I think we still do not have a widely accepted methodology to test and detect the existence of an AGI, and this may delay our realization of the first instances of AGI.

 

How do we maintain safety in an AGI system that is capable of thinking for itself and will most likely be exponentially more intelligent than humans?

I believe that the grand unified theory of intelligent behavior is economics and the study of how agents act and interact to achieve what they want. The decisions and actions of humans are determined by their objectives, their information, and the available resources. Societies and collaborative efforts emerge from their benefits for the individual members of such groups. Another example is the criminal code, which deters certain decisions by attaching a high cost to actions that may harm society. In the same way, I believe that controlling the incentives and resources can enable the emergence of a state of equilibrium between humans and instances of AGI. Currently, the AI safety community investigates this thesis under the umbrella of value-alignment problems.

 

One of the areas you closely follow is counterterrorism. Do you have concerns with terrorists taking over AI or AGI systems?

There are numerous concerns about the misuse of AI technologies. In the case of terrorist operations, the major concern is the ease with which terrorists can develop and carry out autonomous attacks. A growing number of my colleagues are actively warning against the risks of developing autonomous weapons (see https://autonomousweapons.org/). One of the main problems with AI-enabled weaponry is in the difficulty of controlling the underlying technology: AI is at the forefront of open-source research, and anyone with access to the internet and consumer-grade hardware can develop harmful AI systems. I suspect that the emergence of autonomous weapons is inevitable, and believe that there will soon be a need for new technological solutions to counter such weapons. This can result in a cat-and-mouse cycle that fuels the evolution of AI-enabled weapons, which may give rise to serious existential risks in the long term.

 

What can we do to keep AI systems safe from these adversarial agents?

The first and foremost step is education: All AI engineers and practitioners need to learn about the vulnerabilities of AI technologies, and consider the relevant risks in the design and implementation of their systems. As for more technical recommendations, there are various proposals and solution concepts that can be employed. For example, training machine learning agents in adversarial settings can improve their resilience and robustness against evasion and policy manipulation attacks (e.g., see my paper titled “Whatever Does Not Kill Deep Reinforcement Learning, Makes it Stronger“). Another solution is to directly account for the risk of adversarial attacks in the architecture of the agent (e.g., Bayesian approaches to risk modeling). There is however a major gap in this area, and it’s the need for universal metrics and methodologies for evaluating the robustness of AI agents against adversarial attacks. Current solutions are mostly ad hoc, and fail to provide general measures of resilience against all types of attacks.
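A hedged sketch of what adversarial training looks like for the same kind of linear model as above: each update step is computed on FGSM-perturbed copies of the training points, so the model learns to hold up inside the perturbation budget. The data and hyperparameters are synthetic assumptions, not a reproduction of the cited paper.

```python
# Adversarial training of a logistic-regression model on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = (X @ np.array([1.5, -2.0, 1.0]) > 0).astype(float)   # ground-truth labeling rule

w, b, lr, eps = np.zeros(3), 0.0, 0.1, 0.2

def proba(Xb):
    return 1.0 / (1.0 + np.exp(-(Xb @ w + b)))

for epoch in range(200):
    # Craft FGSM perturbations against the current model, then train on them.
    grad_x = (proba(X) - y)[:, None] * w                  # d(loss)/dx per sample
    X_adv = X + eps * np.sign(grad_x)
    p = proba(X_adv)
    w -= lr * (X_adv.T @ (p - y)) / len(y)                # d(loss)/dw
    b -= lr * np.mean(p - y)

# Evaluate on points perturbed against the final model.
X_test_adv = X + eps * np.sign((proba(X) - y)[:, None] * w)
acc = np.mean((proba(X_test_adv) > 0.5) == y)
print(f"accuracy on adversarially perturbed data: {acc:.2f}")
```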

 

Is there anything else that you would like to share about any of these topics?

In 2014, Sculley et al. published a paper at the NeurIPS conference with a very enlightening title: “Machine Learning: The High-Interest Credit Card of Technical Debt”. Even with all the advancements of the field in the past few years, this statement has yet to lose its validity. The current state of AI and machine learning is nothing short of awe-inspiring, but we have yet to fill a significant number of major gaps in both the foundational and the engineering dimensions of AI. This fact, in my opinion, is the most important takeaway of our conversation. I of course do not mean to discourage the commercial adoption of AI technologies, but only wish to enable the engineering community to account for the risks and limits of current AI technologies in their decisions.

I really enjoyed learning about the safety and security challenges of different types of AI systems. This is truly something that individuals, corporations, and governments need to become aware of. Readers who wish to learn more should visit the Secure and Assured Intelligent Learning (SAIL) Lab.


Cybersecurity

Awake Security Plans to Expand After Raising $36 Million


The Santa Clara, California-based startup Awake Security plans to expand after raising $36 million in Series C funding. The company’s cybersecurity platform analyzes network traffic by using artificial intelligence (AI) and human expertise in order to identify internal and external threats. 

The company was founded in 2014 and has since secured around $80 million in total funding, including the Series C round. New investors include Evolution Equity Partners, Energize Ventures, and Liberty Global Ventures, joining existing investors Bain Capital Ventures and Greylock Partners.

“We’re partnering with Awake because we believe its platform can have a big impact in the industrial sector,” Juan Muldoon, partner at Energize Ventures, said. “The challenges with protecting critical infrastructure are changing rapidly, and as the attack surface for digital threats expands, so have the blind spots for many organizations.”

An internally led undisclosed Series B round brought in $12 million in 2018. 

“Awake has assembled the best minds in networking, machine learning, data science, cybersecurity, and other disciplines to create something entirely new that fills a massive void in the security market,” said Rahul Kashyap, CEO of Awake Security. “By partnering with Evolution Equity with its deep U.S. and European network and cybersecurity expertise, and strategic investors Energize Ventures and Liberty Global, we’re building on that momentum to bring the Awake platform to even more organizations around the globe.”

What Awake Security Can Do

Awake Security can identify all devices on a network, as well as whether the device is a phone, tablet, or something else. This allows transparency on networks, where companies can identify devices, users, and applications. The platform relies on machine learning to identify anomalous behaviors. 

The cybersecurity platform combines unsupervised, supervised, and federated machine learning, which uses decentralized data, in order to identify security threats. This is more effective than platforms that rely strictly on unsupervised learning, which can produce false positives.
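A simplified sketch of such a layered approach, assuming hypothetical per-device network features: an unsupervised anomaly detector flags unusual behavior, and a supervised model trained on labeled incidents must agree before an alert fires, which helps cut false positives. None of this reflects Awake's actual pipeline.

```python
# Layered detection: unsupervised anomaly detection plus supervised
# confirmation on labeled incidents. Features and thresholds are invented.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Hypothetical per-device features: [bytes_out_mb, distinct_dest_ips, failed_logins]
normal = np.column_stack([rng.normal(50, 10, 1000), rng.poisson(20, 1000), rng.poisson(1, 1000)])
attacks = np.column_stack([rng.normal(400, 80, 50), rng.poisson(200, 50), rng.poisson(15, 50)])

anomaly_model = IsolationForest(contamination=0.05, random_state=0).fit(normal)

X = np.vstack([normal, attacks])
y = np.concatenate([np.zeros(len(normal)), np.ones(len(attacks))])
confirm_model = LogisticRegression(max_iter=1000).fit(X, y)

def alert(features) -> bool:
    f = np.asarray(features, dtype=float).reshape(1, -1)
    is_anomaly = anomaly_model.predict(f)[0] == -1   # unusual vs. the device baseline
    is_threat = confirm_model.predict(f)[0] == 1     # resembles a known attack
    return is_anomaly and is_threat                  # both layers must agree

print(alert([55, 22, 0]))     # typical traffic -> False
print(alert([450, 230, 20]))  # exfiltration-like pattern -> likely True
```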

Awake Security’s system allows security threats to be identified without over-alerting security teams. Oftentimes, these teams receive a large number of red flags triggered by safe behavior, such as individuals working from somewhere other than their usual locations.

The company has revealed Ava, “the world’s first privacy-aware security expert system.” According to the company’s website, “Ava combines federated machine learning (ML) with expertise from Awake threat researchers and security analysts to identify multi-stage attacks and enable automatic threat validation and triage.”

COVID-19

The ongoing COVID-19 pandemic is causing an increase in cybersecurity threats around the world. Companies are not able to deal with cybersecurity issues as effectively as before because employees are no longer working from offices.

“COVID-19 is a prominent use case,” according to Evolution Equity partner Karthik Subramanian. “If we can identify attacks and compromises in this environment, hopefully we can do something about that. What has happened is the industry, as a whole, is moving toward smarter detection and response in a more timely manner.”

Subramanian led Cisco’s cybersecurity acquisition and investment team before joining Evolution Equity. 

“We invested in Awake because we recognize its unique ability to help organizations fight modern threats. The traction and the third-party recognition Awake has received combined with our resources in and knowledge of the U.S. and European markets only bolsters our conviction,” continued Subramanian.

Increased Cybersecurity Spending and Expansion

Outside of the issues brought on by the current pandemic, spending on cybersecurity is expected to increase modestly by 2023.

Awake Security’s annual recurring revenue has increased by about 700 percent over the past year, and the company has doubled its number of employees.

The plan is for Awake Security to expand after the Series C funding, with Europe as the target. Europe is currently experiencing a skills gap as well as an increase in automation, which makes cybersecurity even more important during this time.

 
