On AI, Patience Is a Virtue
In the nearly two years since ChatGPT launched, generative artificial intelligence (genAI) has run through an entire technology hype cycle, from lofty, society-changing expectations to fueling a recent stock market correction. But within the cybersecurity industry specifically, the excitement around genAI is still justified; it just might take longer than investors and analysts anticipated for the technology to transform the sector.
The clearest recent sign of the shift in hype came at the Black Hat USA Conference in early August, where generative AI played a very small role in product launches, demonstrations and general buzz-making. Compared to the RSA Conference just four months earlier, which featured the same vendors, Black Hat’s focus on AI was negligible. A neutral observer could reasonably conclude that the industry is moving on, or that AI has become a commodity. But that’s not quite the case.
Here’s what I mean. The transformative benefit of applying generative AI within cybersecurity likely won’t come from generic chatbots or from quickly layering AI over existing data processing models. These are building blocks for more advanced and efficient use cases, but right now they’re not specialized for security, and as a result they aren’t driving a new wave of better security outcomes for customers. The real transformation will come when AI models are customized and tuned for security use cases.
Current general AI use cases in security largely employ prompt engineering and Retrieval-Augmented Generation (RAG), an AI framework that enables large language models (LLMs) to tap data sources outside of their training data, combining the best parts of generative AI and database retrieval. The utility of these approaches varies greatly depending on the use case and how well a vendor’s existing data processing supports it; they are not “magic.” The same is true of other applications that require proprietary data and expertise not prevalent on the Internet, such as medical diagnosis and legal work. It seems likely that companies will adjust their data processing pipelines and data access systems to optimize generative AI use cases. Generative AI companies are also encouraging the development of specially tuned models, although it remains to be seen how well this will work for uses where quality and detail are essential.
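To make the RAG mechanism concrete, here’s a stripped-down sketch in Python. Everything in it is illustrative: the advisories are invented, the word-overlap scoring is a crude stand-in for vector-embedding search, and the names (KNOWLEDGE_BASE, retrieve, build_prompt) are placeholders rather than any vendor’s actual pipeline. But the shape is the same: fetch the relevant proprietary context, then prepend it to the prompt before the model ever sees the question.

```python
# A minimal retrieval-augmented generation (RAG) sketch in plain Python.
# Illustrative only: the advisories are invented, word-overlap scoring
# stands in for vector-embedding search, and the final LLM call is
# stubbed out.

import re
from collections import Counter

# Hypothetical internal documents an LLM was never trained on, e.g. a
# vendor's private threat-intel notes.
KNOWLEDGE_BASE = [
    "Advisory 2024-001: Phishing campaign spoofing the payroll portal; "
    "block the sender domain used in the lure emails.",
    "Advisory 2024-002: Exploit observed against the legacy VPN "
    "appliance; patch firmware to version 9.2 or later.",
    "Advisory 2024-003: Credential stuffing against the customer login "
    "API; enable rate limiting and enforce MFA.",
]

def tokenize(text: str) -> Counter:
    """Lowercase word tokens with punctuation stripped."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of tokens shared by query and doc."""
    return sum((tokenize(query) & tokenize(doc)).values())

def retrieve(query: str, k: int = 2) -> list:
    """Return the k documents most relevant to the query."""
    ranked = sorted(KNOWLEDGE_BASE, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved context before it is
    sent to the LLM, grounding the answer in proprietary data."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    # In a real pipeline this prompt would be passed to an LLM API;
    # printing it here shows the augmentation step itself.
    print(build_prompt("How should we respond to the VPN exploit?"))
```

The sketch also shows why utility varies so much from vendor to vendor: the model’s answer can only be as good as the retrieval step feeding it, which in turn depends on how well the underlying security data is organized and indexed.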
There are a few reasons why this specialization will take time to materialize in the security industry, though. One primary reason is that customizing these models requires many humans in the loop during training, and those humans must be subject matter experts in cybersecurity and AI, two fields struggling to hire enough talent. The cybersecurity industry is short roughly four million professionals worldwide, according to the World Economic Forum, and Reuters estimates that there will be a 50% hiring gap for AI-related positions in the near future.
Without an abundance of experts available, the painstaking work of tailoring AI models to a security context will be slow. The cost of the data science needed to train these models also limits the number of organizations with the resources to pursue custom AI modeling. The processing power that cutting-edge AI models require costs millions of dollars, and that money must come from somewhere. Even when an organization has the resources and the team to fuel research into AI customization, forward progress doesn’t happen overnight. It will take time to figure out how best to augment AI models for security practitioners and analysts, and as with any new tool, there will be a learning curve when security-specific natural language processors, chatbots and other AI-assisted integrations are introduced.
Generative AI is still poised to shift cybersecurity into a new paradigm, one in which the offensive AI capabilities that adversaries and threat actors wield compete against security providers’ AI models built to detect and monitor threats. The research and development necessary to fuel that shift is simply going to take longer than the general technology community anticipated.