What started as excitement around the capabilities of Generative AI has quickly turned to concern. Generative AI tools such as ChatGPT, Google Bard, and Dall-E continue to make headlines over security and privacy concerns. The technology is even raising questions about what is real and what isn't, because it can pump out highly plausible, and therefore convincing, content. So much so that at the conclusion of a recent 60 Minutes segment on AI, host Scott Pelley left viewers with this statement: “We’ll end with a note that has never appeared on 60 Minutes, but one, in the AI revolution, you may be hearing often: the preceding was created with 100% human content.”
The Generative AI cyber war begins with this convincing, lifelike content, and the battlefield is wherever hackers leverage tools such as ChatGPT. These tools make it extremely easy for cybercriminals, especially those with limited resources and no technical knowledge, to carry out social engineering, phishing, and impersonation attacks.
Generative AI has the power to fuel increasingly more sophisticated cyberattacks.
Because the technology can produce convincing, human-like content with ease, new AI-driven cyber scams are harder for security teams to spot. AI-generated scams often take the form of social engineering attacks, such as multi-channel phishing conducted over email and messaging apps. A real-world example: a corporate executive receives a message, apparently from a third-party vendor, via Outlook (email) or Slack (messaging app), directing them to click on an attached document to view an invoice. With Generative AI, it can be almost impossible to distinguish a fake email or message from a real one, which is what makes these attacks so dangerous.
One of the most alarming developments is that with Generative AI, cybercriminals can produce attacks in multiple languages, regardless of whether the hacker actually speaks them. The goal is to cast a wide net; cybercriminals won't discriminate among victims based on language.
The advancement of Generative AI signals that the scale and efficiency of these attacks will continue to rise.
Cyber defense for Generative AI has notoriously been the missing piece of the puzzle. Until now. By using machine-to-machine combat, pitting AI against AI, we can defend against this new and growing threat. But how should this strategy be defined, and what does it look like?
First, the industry must act to pit computer against computer instead of human against computer. To follow through on this effort, we must adopt advanced detection platforms that can detect AI-generated threats and reduce both the time it takes to flag and the time it takes to resolve a social engineering attack that originated from Generative AI, something no human can do at that speed and scale.
We recently ran a test of what this can look like. We had ChatGPT cook up a language-based callback phishing email in multiple languages to see whether a Natural Language Understanding (NLU) platform, an advanced detection platform, could detect it. We gave ChatGPT the prompt, “write an urgent email urging someone to call about a final notice on a software license agreement,” and asked it to write the message in both English and Japanese.
The advanced detection platform immediately flagged the emails as a social engineering attack, but native email controls, such as Outlook’s built-in phishing detection, could not. Even before the release of ChatGPT, conversational, language-based social engineering attacks proved successful because they could dodge traditional controls, landing in inboxes without a link or payload. So yes, it takes machine-vs.-machine combat to defend, but we must also be sure we are using effective artillery, such as an advanced detection platform. Anyone with these tools at their disposal has an advantage in the fight against Generative AI.
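As a toy illustration only (the advanced detection platform described above uses proprietary NLU models, not this approach), the key idea is that a language-based detector scores a message on social-engineering cues, such as urgency wording, billing language, and a callback phone number, rather than looking for a link or attachment. The cue lists, scoring function, and sample email below are all invented for illustration:

```python
import re

# Hypothetical cue categories for callback phishing: urgency wording,
# billing language, and a phone number to call back. Note that none of
# these require a link or attachment to be present in the message.
CUES = {
    "urgency": re.compile(r"\b(urgent|immediately|final notice|within 24 hours)\b", re.I),
    "billing": re.compile(r"\b(invoice|license agreement|subscription|renewal|charge)\b", re.I),
    "callback": re.compile(r"\b(?:call|phone|dial)\b.*?\+?\d[\d\s().-]{7,}\d", re.I | re.S),
}

def social_engineering_score(text: str) -> float:
    """Return the fraction of cue categories found in the message (0.0 to 1.0)."""
    hits = sum(1 for pattern in CUES.values() if pattern.search(text))
    return hits / len(CUES)

# Fictitious sample resembling the test prompt's "final notice" scenario.
email = (
    "FINAL NOTICE: Your software license agreement expires today. "
    "To avoid an invoice of $499, call our billing desk immediately "
    "at +1 (555) 013-2047."
)
print(social_engineering_score(email))  # all three cue categories match -> 1.0
```

A production system would replace these regexes with trained language models that generalize across phrasings and languages; the sketch only shows why content-level analysis can flag messages that link- and payload-based filters pass through.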
When it comes to the scale and plausibility of social engineering attacks afforded by ChatGPT and other forms of Generative AI, machine-to-machine defense can also be refined. For example, this defense can be deployed in multiple languages. Nor does it have to be limited to email security; it can be applied to other communication channels such as Slack, WhatsApp, and Teams.
While scrolling through LinkedIn, one of our employees came across a Generative AI social engineering attempt: a strange “whitepaper” download ad with what can only generously be described as “bizarro” creative. On closer inspection, the employee spotted the telltale color pattern in the lower-right corner that is stamped on images produced by Dall-E, an AI model that generates images from text prompts.
Encountering this fake LinkedIn ad was a stark reminder of the new social engineering dangers that emerge when attacks are coupled with Generative AI. It’s more critical than ever to be vigilant and suspicious.
The age of generative AI being used for cybercrime is here, and we must remain vigilant and be prepared to fight back with every tool at our disposal.